
Why cloud native transformation is about people as much as technology for Ticketmaster

Cloud native is a term increasingly used to describe the combination of operations tools such as containers with microservice architectures, resulting in applications that are suited to distributed cloud infrastructure. The idea is that it makes for more resilient and flexible software.

Although much of the concept centres around technology, Ticketmaster executive program director, Bindi Belanger, says a change in the way teams work is equally important.

“Cloud native transformations within your company require major cultural transformations,” Belanger told attendees at CloudNativeCon in Berlin this week. “Everything from how you define and set goals, how your leadership views the importance of outcomes over outputs, then the skill sets and teams and the way that you organise the work.

“I would argue that the way that you organise your delivery of cloud native solutions is just as important as the technology choices that you make.”

The company has realised some significant benefits from its cloud native and devops project. Having relied on outdated systems and processes in its tech operations division, Ticketmaster is now able to provide its developers with infrastructure in a matter of minutes.

“Because of cloud native solutions we have gone from several months to deploy new infrastructure and environments, to when we were in the middle of our devops transformation we got down to a few weeks. And now with cloud native it is just a matter of minutes,” she said.

There have been similar benefits around the frequency of software releases. In the past, the level of coordination required between ops, application support teams and software teams meant that releases didn't happen very often, she explained. In some cases just once every few months.

“With devops we finally got to a more weekly delivery culture and then with cloud native teams were able to release new features as often as they need during the day.”

Legacy systems

Two years ago, however, this was not the case for Ticketmaster.

The company was founded in 1976 and launched its online ticket sales service in 1996. It has since grown substantially, joining with Live Nation in 2010 and now provides a range of services - such as producing concerts - in addition to its core business, with revenues of $7.6 billion.

It is a large, technology-intensive organisation. One of its main challenges is handling huge volumes of traffic on its network that spike when tickets for major acts go on sale. This requires its systems to scale up to handle 150 million transactions in minutes in some cases.

“We invite the entire world to come DDOS our website every time we have a major artist on sale,” Belanger explained.

Supporting its ticket sales are 21 different ticketing systems, which include over 250 different products and services. To support its operations it has relied on a mix of new and legacy technologies amassed over decades. “To build and maintain those products and services we have an organisation of over 1,400 people globally and they build that software on our private cloud, which is about 20,000 virtual machines across seven global data centres.”

Belanger said that its infrastructure is large and complex, and has relied on legacy systems. “We jokingly refer to the tech stack as the tech museum, because we have software from every era,” she said.

Infrastructure bottlenecks

With a diverse business, Ticketmaster has numerous competitors. This places huge importance on the ability to move quickly and create reliable software that supports the wider business. Previously, its legacy systems and outdated organisational processes created a bottleneck to new developments.

“We have a lot of competitive pressure across a large market surface areas, [but] we have legacy tech which [was] not ready for containers or public cloud,” she said.

The effect was to hold the business back from developing new services, with more time focused on maintaining the stability of legacy systems. “We were spending a lot of our time on constant firefighting, which meant that we had very limited resources to work on projects to add new value and new features to our development teams. 

“Those challenges made it very difficult for our developers to work with tech ops,” she said, adding that, because of the complexity of the tech stack, they were highly dependent on operations, and “didn’t have a lot of autonomy themselves.”

“To get a new app deployed or a new environment built out, if we didn’t have capacity on our private cloud…it often took several months, especially if it required purchasing additional hardware to build out our on-prem private cloud.”

Devops transformation

Two years ago the company started to make changes to its technology operations teams, and began to adopt a devops approach.

“We realised that we need to become much more lean and create autonomous teams,” she said.

There were challenges here too. The company grew its developer team by 250 percent, but did not expand its operations team at the same pace. “Because ops didn't scale to match the growth of the developer organisation, eventually all roads led to being blocked by operations. So while we got faster at developing, we didn't get faster at delivering value.”

This was improved by mixing its systems engineers with product delivery teams. “By removing that organisational silo, by taking them out of ops and putting them with those people that needed to make those changes, we were hoping to really get out of those barriers.”

Software automation tools also helped streamline processes.

“The goal of all this was to create delivery teams that were self-sufficient. Their jobs would be to build software, run it, own it, operate it, optimise and monetise it.”

Moving to the cloud

The decision to move Ticketmaster’s data centre infrastructure into Amazon Web Services was a key part of the transition too.

“We don't need to spend a lot of time and money building out infrastructure to be always on,” Belanger said. “We wanted infrastructure that was on demand and scalable. But most importantly the decision to move to the public cloud was to force modernisation of our products and services to cloud native standards.”

She added that there were numerous operational advantages from moving to the cloud. “The benefits of moving to the cloud are clear,” she said. “Not just infrastructure resources like compute and storage, but how we are using our human resources.

“If your teams are spending all their time building and maintaining and upgrading infrastructure they are not spending time adding value and helping development teams move faster.”

The goal was to increase speed. “We wanted to shift our leaders from focusing on not changing things towards taking calculated risks, so that we could enable speed and continuous delivery, which is another way of saying constant change.”

Cloud native teams

“Our decision to move to the public cloud was a decision to become a cloud native company,” said Belanger.

A variety of measures were put in place to create more efficient technology operations:

  • ‘Tech maturity’ and ‘team maturity’ models were put in place as a way to measure effectiveness and target improvements. “We wanted to be able to define and measure team performance and technical performance objectively.”
  • Data was used to inform decisions around operational changes. The business started “publishing telemetry on everything from uptime to failed maintenance”. “We were working towards creating a culture where change was normal and not something to fear,” she said.
  • Smaller, more agile teams were created. “In order to create the ideal cloud native teams we realised that smaller was better. Two to five people teams have proven to be really successful. Two people to a team might seem a little strange, but we found that having fewer people focus on the same problem allowed us to move faster.”
  • New staff were recruited to support the cloud native approach. “We wanted people that had a developer background, which is not easy to find if you are looking for people who are also really familiar with infrastructure.
  • “We wanted problem solvers. We didn’t want people who were like ‘we have done this for the past five years lets keep doing it, the status quo is fine’. We wanted people that were constantly looking to drive and embrace change.”

Overall, the changes meant that the tech teams could move faster.

“Instead of having a weekly iteration or month-long iteration, every morning the team will get together and say what are we delivering today, and at the end of the day you are asked to demo the value that you delivered. So we get away from the two-week iteration, or the end of the quarter we will have something delivered; we would like to see what we are going to get done each day.”

Kubernetes containers

Of course, investment in new technologies also played a key role in the operations changes.

New tools were adopted, deploying Kubernetes container orchestration with CoreOS Tectonic. Prometheus monitoring and Helm packaging tools were also added.

Belanger said the rapid pace of change in operations technology means the team has to be prepared to adopt new systems quickly.

“You can’t stick to a single framework and say this is the box we are going to live in and we must live in that box,” she said.

Kubernetes has helped create applications that are much easier to update, she said.

“One of the great use cases that we have seen is the new Ticketmaster web platform that was built on Kubernetes. It is still in its beta phase now. Before Kubernetes, even though we were a modern team with great lean practices, we were building on new technology, it still took about 20 minutes or so to deploy, with low confidence - it would often run into issues.

With Kubernetes, there are “fully automated updates that can happen within a minute. And because of that it helped to enable our daily delivery culture”.
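As a rough, hedged sketch of what such fully automated updates can look like in practice (not Ticketmaster's actual pipeline; the deployment name, namespace and image are hypothetical), a release script can trigger a Kubernetes rolling update simply by patching a Deployment's container image via the official Python client:

```python
# Illustrative sketch only: trigger a rolling update by patching the image of
# an existing Deployment. Names and the registry URL are hypothetical.
from kubernetes import client, config

def roll_out(image, deployment="web-frontend", namespace="default"):
    config.load_kube_config()   # use load_incluster_config() when run inside the cluster
    apps = client.AppsV1Api()
    patch = {"spec": {"template": {"spec": {"containers": [
        {"name": deployment, "image": image}]}}}}
    # Changing the pod template's image starts a rolling update: Kubernetes
    # replaces pods incrementally and halts if the new pods fail health checks.
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

if __name__ == "__main__":
    roll_out("registry.example.com/web-frontend:1.4.2")
```

Because the rollout itself is handled declaratively by the cluster, releasing several times a day becomes a one-line change rather than a coordinated operations event.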


Kimberly-Clark opts for Anaplan over Workday for predictive workforce planning

Kimberly-Clark, the company which manufactures a huge range of paper-based consumer products, including the Kleenex and Huggies brands, turned to Anaplan over rival Workday when it wanted a simple headcount two years ago, and has since expanded into more predictive workforce planning.

Now that its models are in place on the Anaplan platform the company is able to better project retirements and employee churn, and the tool is already being eyed up by other departments like finance and supply chain for its powerful capabilities.

Read next: How Anaplan plans to kill off Excel use within the enterprise

At Anaplan's Hub conference in San Francisco this week Jim Miranda, manager of global workforce planning and analytics at Kimberly-Clark, told Computerworld UK that he turned to the tool around two years ago when he was tasked with giving the chief financial officer (CFO) an accurate headcount following the spin-off of what is now Halyard Health.

Before Anaplan, Miranda had to contend with departmental headcount being stuck either in Excel spreadsheets or even on paper. "We had one group in Chicago that I will never forget. They had their personnel plan in a spiral notebook and I was like: 'can I at least get a photocopy?' I mean how am I supposed to aggregate this data across the organisation if it is in a notebook?" So Miranda was tasked with aggregating all of this data into a single model within Anaplan.

Anaplan vs Workday

Kimberly-Clark initially adopted Anaplan in its Russian unit back in 2013 for financial planning and analysis (FP&A) and it was soon adopted enterprise-wide for the same purpose, meaning Miranda inherited the tool when he joined.

When asked why he didn’t use fellow cloud SaaS vendor Workday’s own workforce planning capabilities, considering Kimberly-Clark was an existing customer of Workday’s core HR system, Miranda said: "We looked at the Workday tool, because they profess to have the capability", but that it "wasn’t ready for prime time and is still not".

So Miranda ended up opting for Anaplan because "we found from a truly planning and forecasting perspective, Anaplan was the one tool which was flexible enough to do what we needed, and frankly it was cost-effective."

Read next: Workday CEO says he considered buying UK cloud firm Anaplan, before building a rival product

He added: "For me, Workday has done a very good job at what they do but I think in a lot of ways they have become unfocused on their core differentiators and are trying to incorporate all of these things in order to be a one-stop shop. Workday was started by the old PeopleSoft folks and I think they have got into that PeopleSoft habit of trying to be all things to all people."

Kimberly-Clark also has recently implemented Workday Recruiting, a move Miranda is clearly not a huge fan of. "From my perspective, I will tell you that we went from Taleo, which is absolutely best-in-class, to Workday which we can suffer through and make work," he said.

Use cases

Kimberly-Clark has expanded the way it uses Anaplan over the past few years now that the core model is in place, and one thing the tool helped with was more accurately predicting retirement risk in the United States.

"Our average retirement age for a skilled worker is 60.1 years, something like 70 percent of our skilled tradesmen are over age 55, so within five years 70 percent of that workforce is going to go away, which was a scary number," Miranda said.

"What we have been able to do with Anaplan is building predictive models that tell us not just that somewhere in this five-year window people are going to retire but take data from past retirements and profiles to identify that this specific person's risk of retirement is 37 percent this year and will be 48 percent next year."

This gives operations the chance to better identify who they will need to replace in the short term.
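Kimberly-Clark's Anaplan model itself is not public, but the per-person scoring Miranda describes can be sketched in outline as a classification model trained on past retirements; the file names and feature columns below are hypothetical, purely to illustrate the shape of the approach:

```python
# Minimal sketch, assuming historical employee-year records with a label for
# whether the person retired that year. Columns and files are invented.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.read_csv("past_retirements.csv")        # one row per employee-year
features = ["age", "tenure_years", "pension_eligible", "site_headcount"]

model = LogisticRegression(max_iter=1000)
model.fit(history[features], history["retired_within_year"])

current = pd.read_csv("current_workforce.csv")
current["retirement_risk"] = model.predict_proba(current[features])[:, 1]
# A given person might score, say, 0.37 this year and higher next year as age
# and tenure increase, which is the kind of output the article describes.
print(current[["employee_id", "retirement_risk"]]
      .sort_values("retirement_risk", ascending=False).head())
```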

Read next: Cote Brasserie moves from 'herculean planning' to monthly budgeting cycle with adoption of Anaplan

Kimberly-Clark has also been able to clean up the open positions across the whole business after Anaplan allowed them to better audit their headcount.

Miranda explained: "Kimberly-Clark has 45,000 employees and the first time we added open positions to our model we had 19,000. We looked at that and said no way do we have 65,000 needed positions, so something is not right."

That led to a data clean up which did away with 16,000 of those open positions, which "has sped up the Workday system and people now have confidence that positions are actually open and not just noise in the system," Miranda said.


Cloud Native Computing Foundation unveils new members and projects as container market booms

The Cloud Native Computing Foundation (CNCF) unveiled new additions to its member programme at its user conference in Berlin this week, as well as adding new technologies — containerd and rkt — to its growing portfolio of open source projects.

The not-for-profit organisation, which operates as part of the Linux Foundation, was set up little more than a year ago to focus on the development of open source tools relating to the fast growing area of 'cloud-ready' applications. These are typically built around microservice architectures and make use of tools such as containers to make software development and deployment faster and simpler.

It is an area which has been popular with startups creating new, greenfield applications. Larger enterprise firms are also taking the approach, predominantly for new software developments, but increasingly for older apps too, which are being refactored to run easily in the cloud.

Unsurprisingly, there are many tech vendors attracted to the fast growing market. A recent report from 451 Research estimated that the application container market alone was worth $762 million in 2016, and is set to reach $2.7 billion in 2020. With a CAGR of 40 percent, 451 says that it is now tracking 125 vendors operating in the container space.

CNCF has seen a range of tech firms join since launching, with Cisco, CoreOS, Docker, Fujitsu, and Google on the governing board. At CloudNativeCon + KubeCon Europe, Dell Technologies was announced as the latest vendor to join CNCF as a platinum member.

"They are making major investments in cloud storage, with their REXRay storage project," said CNCF executive director, Dan Kohn. SUSE also joins as a gold member, while there were four new silver members: Solinea, HarmonyCloud, QAware and TenxCloud. There are now 81 members in total.

New partnerships

Kohn revealed that Docker's core container runtime, containerd, will be supported as an incubating project by CNCF. Docker had announced that the project would be contributed to a "neutral" foundation earlier this year. It is the latest in a number of systems open sourced by Docker, beginning with libcontainer in 2014.

"It is a really natural partnership with Kubernetes, gRPC, Prometheus and our other projects," said Kohn.

CoreOS' rkt container engine was also accepted to the foundation. Introduced in 2014, rkt now has 178 contributors, over 5,000 commits and 59 releases.

"With Containerd and Rocket it is clear the CNCF is really the focal point for containerisation, and we are incredibly excited to have that market leadership and now be able to dedicate a huge amount of resources and hopefully help accelerate the development of those projects," Kohn said.

The new additions join monitoring tool Prometheus, OpenTracing, and logging system Fluentd, as well as container orchestration platform Kubernetes, originally developed at Google.

Kubernetes 1.6

The latest version of Kubernetes was announced ahead of CNCF's event, with a range of new features including support for 5,000-node clusters. Kubernetes federation also enables users to scale beyond this level or spread across multiple regions or clouds by combining multiple clusters.

"The theme for this release is multi-teams, multi-workloads at scale. It is a result of 5,000 commits, 275 authors from everywhere around the world," said Aparna Sinha, Kubernetes senior product manager at Google.

There were also additions around role-based access control, to address security concerns, and dynamic storage provisioning.
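As a hedged illustration of what role-based access control looks like in use (RBAC entered beta in Kubernetes 1.6 and later graduated to the v1 API used here; the role name and namespace are made up for the example), a namespaced read-only role can be created with the Kubernetes Python client:

```python
# Illustrative only: a role that can read pods in a single namespace.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="demo"),
    rules=[client.V1PolicyRule(api_groups=[""],        # "" is the core API group
                               resources=["pods"],
                               verbs=["get", "list", "watch"])])

rbac.create_namespaced_role(namespace="demo", body=pod_reader)
# A RoleBinding (not shown) would then grant this role to a user, group or
# service account.
```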

Sinha highlighted the core role that Kubernetes has been playing in popularising containers and cloud-native technologies more generally through an open source approach. "[Kubernetes] is attempting to redefine how the world runs applications on distributed systems and we believe that this is only possible through an open and transparent and diverse community of users and contributors."

CNCF

It is clear that more and more organisations are adopting containers, and even moving on from development and test to production uses. A 451 Research survey from May 2016 showed that of the 25 percent of organisations polled which are using containers, 34 percent were in "broad implementation" of production applications, and 28 percent had begun initial implementation in production.

It is still early days for CNCF, and there are some questions around the maturity of container technologies among more traditional enterprises, but it, along with a range of related open source foundations such as Cloud Foundry, OpenStack and the Open Container Initiative, is helping to develop the technology to achieve the growth predicted.

CNCF's Dan Kohn highlighted the growth of its European event, with 1,500 attendees in Berlin a significant increase on the 500 attendees at the KubeCon event in London last year. "This is testament to the excitement around Kubernetes and cloud-native in general," he said.


New Anaplan CEO Frank Calderoni lays out his vision for the SaaS unicorn as it prepares for an IPO

The new CEO of the cloud-based planning and forecasting Software-as-a-Service (SaaS) company Anaplan is only nine weeks into the job but has already laid out how he plans to take the company from unicorn-status to a potential IPO this year.

The ex-Cisco and Red Hat CFO Frank Calderoni arrived at the company after a search which took nearly a year to complete following the abrupt exit of Frederic Laluyaux in April last year.

He has been tasked with continuing the strong growth the SaaS company has been seeing since being valued at over $1 billion in January 2016 following a $90 million (£72 million) funding round. Anaplan says that it added a record number of customers in 2016 — 250 to be exact — and reported a 75 percent rise in subscriptions, contributing towards $120 million (£96.5 million) in total revenue.

This week in San Francisco the new CEO got the chance to lay out his vision for the company on stage at Anaplan’s annual Hub event.

Read next: Anaplan: What does the UK’s latest tech unicorn actually do? We talk to Anaplan to find out how it is changing enterprise resource planning (ERP) software

Described as "a CEO in waiting" by his old Red Hat boss Jim Cramer, Calderoni started out by saying that it felt "natural for me to be here if I look at my background" and that he was excited by the opportunity at Anaplan because the software has the potential to solve the same challenges he had encountered throughout his career, namely: "Data and how we get data, and how we get that information connected and driving decisions."

In terms of putting the company's current strategy into a soundbite, Anaplan is fond of referring to what it does as "connected planning". So instead of various disconnected Excel spreadsheets dotted throughout an organisation, employees can collaborate on a single data model in real time.

The new CEO has fully bought into the message, saying: "Imagine a technology which allows everybody to have access to the best information to make decisions quickly, easily and boldly, this is the world I want to live in and why I came to Anaplan."

Three-pronged approach

During his keynote, Calderoni laid out the three tenets of his strategy for the company, and they will sound familiar to anyone well-versed in the SaaS lexicon here in Silicon Valley: customer first, innovative technology and community.

Calderoni told Computerworld UK that although many SaaS companies say they are customer first, many are not. The way he intends to make sure that Anaplan is different is by making it part of the company culture.

"Being customer first has a huge cultural element to it," he explained. "If people look at their roles as just a job description, and are too narrow, and don’t understand the bigger purpose of why they are at Anaplan, that is a cultural thing, and one that every company can do more of."

"What I learned over the years is that companies that put their customers at the centre of their strategy are the companies that win, that have the longevity and strength in the market to be successful over time," he added. "So we kicked off an initiative called customer first, which is internal to us. All for the benefit of meeting [customer] needs, requirements, and expectations."

Read next: How Anaplan plans to kill off Excel use within the enterprise

When it comes to technology, Calderoni doesn't want Anaplan resting on its laurels as a leader in the planning software space, especially with big-name rivals like Workday starting to encroach on its territory.

"Yes we need to make continued investment to add on some of the requests our customers have and that is where I want to stay focused," he said. "We also have to be aware of how technology will evolve and make sure that is part of our long-range roadmap. [One] of the things that come up is the open platform, so allowing more connections and feeders in."

Calderoni also wants to ensure that Anaplan keeps up-to-date with the latest technology, saying that his CTO Michael Gould will continue to look into "how technology evolves around AI and machine learning and if we could evolve into capturing some of that into our platform, potentially."

Lastly, there is an increased focus on community, building out the existing Anaplan portal, adding user groups and continuing to grow events like Hub.

Calderoni saw the importance of community during his time at Red Hat, saying: "I learned a lot with my Red Hat experience as it relates to open source community and the value of that. In the community itself, where you have users of your technology connecting and sharing best practice, ideas, code, it makes you advance much faster as you bring in all that expertise and select the best."

IPO preparations

In terms of preparing Anaplan for a highly-anticipated IPO, Calderoni wouldn't be pushed on a timeline, but he does believe the market is showing green shoots and that Anaplan is well positioned to take advantage.

"Do we feel like we can reach a point where we are ready for an IPO? Yes. Do we also feel that we are on our way to profitability? Yes, but there are different steps that have to happen along that journey. I think we are well positioned right now for both." Calderoni cited the customer growth and uptick in subscriptions in 2016 as markers that the company is on that path to profitability.

"We did announce last year that for the first half of the fiscal year we were cash flow-break-even, which shows we have a level of frugality and that investments are made for a return and that is a good thing," he added.

In terms of the market, Calderoni says he has been buoyed by the recent float of MuleSoft and the impending Okta IPO. "We are starting to see some success [in the market] which a year ago wasn’t the case, so that is one of the factors we look at in terms of timing.

"So is the market able? Is there willingness and appetite to invest in software companies that have the kind of trajectory that we have? That is something we continue to look at."

Read next: Workday CEO says he considered buying UK cloud firm Anaplan, before building a rival product

Calderoni is also conscious of chasing growth too aggressively though, saying: "We also have to realise where we are, we are a startup company working with customers that have a significant amount of demands on us, so I want us to stay focused. We can spread ourselves too thin too fast and then not necessarily accomplish the goals our customers need and we need long-term."


Best cloud management tools for business 2017: How to manage your cloud computing usage and costs

Most cloud computing platforms run a pay-as-you-go model and this can make managing the general usage and costs difficult. We have compiled a list of cloud computing management tools for businesses that aim to manage costs, usage and ultimately optimise the cloud.

Here are some of the best cloud management tools for business.

Have we missed one you like? Let us know.

Read next: Microsoft Azure vs Amazon AWS public cloud comparison: Which cloud is best for the enterprise?

1. Cloud management tools: Cloudability


Cloudability provides 'data-driven cloud cost management', using detailed metrics to offer businesses insight into their cloud spending.

The software monitors cloud usage and generates budget alerts and daily email reports to keep businesses in the loop with their finances. 

Cost: Custom pricing available.

2. Cloud management tools: Skeddly


Designed to automate Amazon Web Services, Skeddly aims to schedule backups, snapshot your EC2 instances and RDS databases, automate devops and even reduce your overall costs.

Interestingly, in terms of security, Skeddly assumes no permissions or access to your AWS account, and it will actually create a custom IAM policy for you, so all permissions can be controlled.

Cost: Pay as you go.
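Skeddly's internals aren't public, but the kind of AWS operation it schedules, snapshotting the EBS volumes behind EC2 instances, boils down to calls like the boto3 sketch below; the tag names and region are assumptions for illustration only:

```python
# Rough sketch of a scheduled EBS snapshot job, not Skeddly's implementation.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")     # region is an assumption

def snapshot_tagged_volumes(tag_key="Backup", tag_value="daily"):
    volumes = ec2.describe_volumes(
        Filters=[{"Name": f"tag:{tag_key}", "Values": [tag_value]}])["Volumes"]
    for vol in volumes:
        ec2.create_snapshot(VolumeId=vol["VolumeId"],
                            Description=f"Scheduled snapshot of {vol['VolumeId']}")

if __name__ == "__main__":
    snapshot_tagged_volumes()
```

A scheduler such as Skeddly wraps calls of this sort with the timing, permissions and housekeeping around them.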

3. Cloud management tools: Qstack


Qstack aims to reduce licensing costs by consolidating various cloud management platforms. Qstack is compatible with AWS, Azure and any EC2-compatible IaaS clouds.

What's more, Qstack works with KVM, VMware and Hyper-V while also being able to unify physical hardware, hypervisors, virtual machines, true bare metal and containers across multiple clouds and internal data centres.

Cost: Available on request.

4. Cloud management tools: CloudRanger


CloudRanger makes it easy to back up AWS Elastic Compute Cloud (EC2) instances and Relational Database Service (RDS) databases. CloudRanger even claims to reduce your AWS EC2 costs by up to 65 percent by shutting down resources that are not currently in use.

Via a simple dashboard users can manage all AWS integrations, tasks, schedules and multiple users.

Cost: Between $29 (£23) and $249 (£200) a month, with a free trial available.

5. Cloud management tools: ParkMyCloud


ParkMyCloud claims to reduce cloud computing costs by up to 60 percent in 15 minutes by scheduling on/off times, so customers only pay for the computing power they actually use.

Users will be able to manage multiple AWS and Azure accounts and save time with automated schedules and policies with relative ease.

Cost: $29 (£23) per month

6. Cloud management tools: CSC Agility Platform


CSC's Agility Platform manages the entire lifecycle of data in the cloud, from planning, designing and consumption to the total cost of operation of a business's cloud-based services.

CSC also offers workflow automation and automated firewall configuration over both private and public clouds.

Cost: Available on request.

7. Cloud management tools: VMware


VMware's vRealise Business provides users with details about virtual machine running costs and helps manage budgets and resources, providing detailed reporting of inefficiencies in users' cloud infrastructure.

Cost: VMware vRealise Business is available on a standalone basis or priced on a per user basis.

8. Cloud management tools: Dynatrace


While Dynatrace application monitoring software is not specifically designed for cloud cost management, it does offer cloud monitoring services within a dashboard format, aiming to 'eliminate all blind spots'.

Cost: Free trial and custom pricing available.

9. Cloud management tools: IBM's SmartCloud cost management


IBM's SmartCloud provides cost-monitoring with tracking and usage reports also available. SmartCloud also has a tool that tracks costs and business processes against budgets meaning users have greater control over their cloud finances. 

Cost: Custom pricing on a subscription basis.

10. Cloud management tools: Cloud Cruiser


Cloud Cruiser uses analytics and smart tags to illustrate a business’s cloud usage and provide suggestions for optimisation. It offers history reports of a business’s cloud usage and allows thresholds to be set and users alerted when an overspend is about to occur.

Cost: Pricing available on request.

11. Cloud management tools: Cloudyn


Cloudyn's software monitors cloud resources, making practical suggestions for optimising usage. Cloudyn will also provide cloud 'trend' reports to make sure businesses are not paying more than they should be. All this information is available on a dashboard and via email alerts. 

Cost: Cloudyn offers a free trial with other pricing available on request.

12. Cloud management tools: RightScale


RightScale aims to bring simplicity to businesses' cloud operations and drive visibility with detailed reporting and history tracking.

RightScale's cloud ROI calculator allows users to determine cost benefits and make informed business decisions. 

Cost: Free trial available and free for up to five users. For more users, custom pricing is available.

13. Cloud management tools: HPE Helion Cloud Suite


Working across open source, hybrid cloud and multi-cloud environments, the HPE Helion Cloud Suite provides a centralised dashboard to control resources across business infrastructures, enabling businesses to build and operate cloud services across their whole organisation.

According to HPE, Air France reduced its time spent provisioning infrastructure by more than 50 percent after implementing HPE hybrid cloud management solutions.

Cost: Available on request

14. Cloud management tools: BMC


BMC has a cloud cost management service called 'truesight capacity optimisation' which aims to bring about cost-effectiveness and cost optimisation. 

BMC's dashboard provides reports that allow users to monitor costs and track past transactions.

Cost: Free trial and custom prices available on request.

15. Cloud management tools: cloudMatrix


cloudMatrix, from Gravitant, an IBM company, looks after budget, usage and costs through its centralised cost management system. cloudMatrix provides estimated billing which allows businesses to factor in future costs.

Cost: Custom prices available on request.


A rare look at LG quality testing the G6 in a South Korean factory


A faster single-pixel camera

    Compressed sensing is an exciting new computational technique for extracting large amounts of information from a signal. In one high-profile demonstration, for instance, researchers at Rice University built a camera that could produce 2-D images using only a single light sensor rather than the millions of light sensors found in a commodity camera.

    But using compressed sensing for image acquisition is inefficient: That “single-pixel camera” needed thousands of exposures to produce a reasonably clear image. Reporting their results in the journal IEEE Transactions on Computational Imaging, researchers from the MIT Media Lab now describe a new technique that makes image acquisition using compressed sensing 50 times as efficient. In the case of the single-pixel camera, it could get the number of exposures down from thousands to dozens.

    One intriguing aspect of compressed-sensing imaging systems is that, unlike conventional cameras, they don’t require lenses. That could make them useful in harsh environments or in applications that use wavelengths of light outside the visible spectrum. Getting rid of the lens opens new prospects for the design of imaging systems.

    "Formerly, imaging required a lens, and the lens would map pixels in space to sensors in an array, with everything precisely structured and engineered," says Guy Satat, a graduate student at the Media Lab and first author on the new paper.  "With computational imaging, we began to ask: Is a lens necessary?  Does the sensor have to be a structured array? How many pixels should the sensor have? Is a single pixel sufficient? These questions essentially break down the fundamental idea of what a camera is.  The fact that only a single pixel is required and a lens is no longer necessary relaxes major design constraints, and enables the development of novel imaging systems. Using ultrafast sensing makes the measurement significantly more efficient." 

    Recursive applications

    One of Satat’s coauthors on the new paper is his thesis advisor, associate professor of media arts and sciences Ramesh Raskar. Like many projects from Raskar’s group, the new compressed-sensing technique depends on time-of-flight imaging, in which a short burst of light is projected into a scene, and ultrafast sensors measure how long the light takes to reflect back.

    The technique uses time-of-flight imaging, but somewhat circularly, one of its potential applications is improving the performance of time-of-flight cameras. It could thus have implications for a number of other projects from Raskar’s group, such as a camera that can see around corners and visible-light imaging systems for medical diagnosis and vehicular navigation.

    Many prototype systems from Raskar’s Camera Culture group at the Media Lab have used time-of-flight cameras called streak cameras, which are expensive and difficult to use: They capture only one row of image pixels at a time. But the past few years have seen the advent of commercial time-of-flight cameras called SPADs, for single-photon avalanche diodes.

    Though not nearly as fast as streak cameras, SPADs are still fast enough for many time-of-flight applications, and they can capture a full 2-D image in a single exposure. Furthermore, their sensors are built using manufacturing techniques common in the computer chip industry, so they should be cost-effective to mass produce.

    With SPADs, the electronics required to drive each sensor pixel take up so much space that the pixels end up far apart from each other on the sensor chip. In a conventional camera, this limits the resolution. But with compressed sensing, it actually increases it.

    Getting some distance

    The reason the single-pixel camera can make do with one light sensor is that the light that strikes it is patterned. One way to pattern light is to put a filter, kind of like a randomized black-and-white checkerboard, in front of the flash illuminating the scene. Another way is to bounce the returning light off of an array of tiny micromirrors, some of which are aimed at the light sensor and some of which aren’t.

    The sensor makes only a single measurement — the cumulative intensity of the incoming light. But if it repeats the measurement enough times, and if the light has a different pattern each time, software can deduce the intensities of the light reflected from individual points in the scene.
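    As a toy illustration of that deduction step (not the MIT or Rice systems themselves), the sketch below simulates a handful of random on/off patterns, records one cumulative intensity per pattern, and recovers a sparse scene with an off-the-shelf L1-regularised solver; all sizes and values are invented:

```python
# Toy single-pixel measurement model: y = A x, where each row of A is a random
# on/off light pattern and x is a sparse scene. Numbers are made up.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_pixels, n_measurements = 64, 24            # fewer measurements than pixels

scene = np.zeros(n_pixels)
scene[[5, 17, 40]] = [1.0, 0.6, 0.8]         # a sparse "scene" of bright points

patterns = rng.integers(0, 2, size=(n_measurements, n_pixels)).astype(float)
readings = patterns @ scene                  # one cumulative intensity per pattern

# L1-regularised least squares favours sparse solutions, which is what lets
# compressed sensing get away with so few measurements.
solver = Lasso(alpha=1e-3, fit_intercept=False, max_iter=10000)
solver.fit(patterns, readings)
print(np.round(solver.coef_[[5, 17, 40]], 2))   # approximately recovers the bright points
```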

    The single-pixel camera was a media-friendly demonstration, but in fact, compressed sensing works better the more pixels the sensor has. And the farther apart the pixels are, the less redundancy there is in the measurements they make, much the way you see more of the visual scene before you if you take two steps to your right rather than one. And, of course, the more measurements the sensor performs, the higher the resolution of the reconstructed image.

    Economies of scale

    Time-of-flight imaging essentially turns one measurement — with one light pattern — into dozens of measurements, separated by trillionths of seconds. Moreover, each measurement corresponds with only a subset of pixels in the final image — those depicting objects at the same distance. That means there’s less information to decode in each measurement.

    In their paper, Satat, Raskar, and Matthew Tancik, an MIT graduate student in electrical engineering and computer science, present a theoretical analysis of compressed sensing that uses time-of-flight information. Their analysis shows how efficiently the technique can extract information about a visual scene, at different resolutions and with different numbers of sensors and distances between them.

    They also describe a procedure for computing light patterns that minimizes the number of exposures.  And, using synthetic data, they compare the performance of their reconstruction algorithm to that of existing compressed-sensing algorithms. But in ongoing work, they are developing a prototype of the system so that they can test their algorithm on real data.

    “Many of the applications of compressed imaging lie in two areas,” says Justin Romberg, a professor of electrical and computer engineering at Georgia Tech. “One is out-of-visible-band sensing, where sensors are expensive, and the other is microscopy or scientific imaging, where you have a lot of control over where you illuminate the field that you’re trying to image. Taking a measurement is expensive, in terms of either the cost of a sensor or the time it takes to acquire an image, so cutting that down can reduce cost or increase bandwidth. And any time building a dense array of sensors is hard, the tradeoffs in this kind of imaging would come into play.”



    Carbon Black warns of over reliance on 'nascent' machine learning security

    Security vendor Carbon Black has issued a report warning businesses not to place too much trust in machine learning-based security products.

    The company surveyed 400 non-vendor security professionals who overwhelmingly agreed AI-equipped technology is in its nascent phase, and so organisations must proceed with caution when adopting any such products.

    Although AI-based technologies do have their place, it would be a mistake for businesses to buy into vendor hype or over-rely on these systems, the report said.

    The findings echo a recent report from ABI Research that dedicated a significant section to warning against vendors peddling machine learning as 'snake oil'.

    Security professionals cited high false positive rates and the ease with which machine learning-based technologies can be bypassed – at present – as the most serious barriers to adoption.

    Respondents also said that the high false positive rate could have other negative impacts on operations, such as considerable slowdown if a team of researchers finds itself having to sift through and check against each of these.

    Of course, the other side is that there will be plenty of customers who find machine learning-enabled security invaluable, especially in smaller organisations where the security team might be the same as the IT team and automated processes are especially important.

    See also: Church of England puts a stop to ransomware with Darktrace

    But Carbon Black says that at present, machine learning and artificial intelligence technologies should be seen as a way to augment processes rather than as a wholesale solution.

    According to the report: "AI technology can be useful in helping humans parse through significant amounts of data. What once took days or weeks can be done by AI in a matter of minutes or hours. That’s certainly a good thing.

    "A key element of AI to consider, though, is that it is programmed and trained by humans and, much like humans, can be defeated. AI-driven security will only work as well as it’s been taught to."

    Speaking at a roundtable event in central London, Rick McElroy, security strategist for Carbon Black, said: "The community has said the biggest benefits are this: it augments human decision making. I 100 percent agree with that, it should absolutely allow you to make better decisions. And it learns your company’s security preferences. But here’s the biggest risk – it’s easy to bypass, so people are relying on things that are easy to bypass.

    "False positives could cause you and your team hundreds of hours to go and figure out a false positive, only to end up with: ‘oh, we just wasted a week’s work on a false positive that never existed’."

    According to the research, 70 percent of respondents felt that attackers are able to get past machine learning-driven security products, and a third of respondents claimed it was "easy" to do so.

    See also: Machine learning in cybersecurity: what is it and what do you need to know?

    Carbon Black recommends that security teams looking into using machine learning tools make sure they have the existing data to properly train the technology with. That includes a "massive body of baseline data, a torrent of detonation data, and statistics and comparisons among behaviours for validation" to generate the best patterns of malicious behaviour.

    "I think the important thing to remember with AI is this," said McElroy. "It is a thing we’re all going to start using and will eventually put me out of a job. How far on the horizon that is, I have no idea. But today if you’re solely dependent on AI to make your security decisions you’re going to be in a bad way."

    The report also found a dramatic increase in non-malware attacks since the start of 2016. Carbon Black noticed that almost every one of its customers had been targeted by a non-malware attack throughout 2016, which was part of the reasoning behind commissioning the report.

    A non-malware attack is one that doesn’t place executables on the target endpoint but uses existing software, applications or authorised protocols to carry out the attack. Powershell,  a system administrator tool that is on every Windows box, is a good example.

    "About five or six years ago at Black Hat some researchers said Powershell is going to be the thing and they wrote a tool to leverage Powershell attacks," McElroy said.

    In 2016, these attacks evolved into the Powershell-based ransomware, Powerware. And the Squiblydoo attack was similarly built to wriggle past application whitelisting processes by exploiting existing system tools, where it is then able to run unapproved scripts.

    Respondents told Carbon Black that they had seen some other particularly creative non-malware attacks, including efforts to affect a satellite transmission, impersonating the CSO while trying to access corporate intellectual property, and spoofing login systems so login information was immediately made available to the attacker.

    "Spoofing logins to appear authentic – we call that living off the land," said McElroy. "The best thing I want to do as an attacker is look exactly like your system administrator, and if I can get that level of access I can do what I want for years and you’ll never detect me."

    Some efforts to address non-malware attacks included providing employee awareness training, turning to next-generation antivirus, more of a focus on patching, and locking down personal device usage when appropriate.


    Salesforce self-service platform helps Aylesbury Vale district council save millions

    When the impact of the 2008 financial crisis extended into long-term public sector cuts, Aylesbury Vale District Council (AVDC) turned to technology to help it navigate the age of austerity.

    "Five years ago we decided that the world had turned," AVDC CEO Andrew Grant remembers. "Austerity was in its infancy, but we realised that we were going to need to change our business completely because we were a traditional council in receipt of government funds and they looked like they were going to dry out by 2020."

    They answered the challenge by embarking on a sweeping digital transformation strategy dubbed "Right Here Right Now". It began with a website redesign and developed into a complete overhaul of services as the council moved towards a commercially orientated business model.

    At the centre of the transformation is a Salesforce platform known as "My Account", based on Salesforce Community Cloud software. It supports a variety of interactions with the council, such as automating processes like signing up to the garden waste service or making council tax payments.

    Five years after embarking on the strategy in 2010, the council had saved £14 million, half of which it estimates was down to the digital overhaul.

    The council chose to pursue a customer-focused business model that could generate income while reducing costs, drawing on private sector practices for inspiration. AVDC began to sell services provided by private sector entities and invest the profits that these generated in services such as homelessness inquiries and social care.

    At the same time, the council was embarking on replacing their static legacy equipment with a cloud system to support a more tailored service for citizens. Grant wanted to create a market for new suppliers that could come in and provide software-as-a-service services.

    The Buckinghamshire organisation became one of the first councils to move to the cloud. Today only one in-house server remains, as the DWP won’t license AVDC to move it offshore.

    Read next: How Salesforce brought artificial intelligence and machine learning into its products with Einstein

    AVDC used the Salesforce platform to replace legacy systems with services that could swiftly be tailored to changing customer demands while also reducing infrastructure costs. 

    "We wanted something that was disruptive, we wanted something that was globally known as a leader in CRM and content management, that let's face it, the private sector used to handle their customer interaction," says Grant.

    The core of the new website was a feature inspired by commercial retail practices called "My Account". It collects all activities with the council in one place and provides self-service functions for users.

    The council ran workshops with local residents to understand their needs. These included applying for benefits, checking and paying council tax, signing up for new services and updating personal information.

    "We've got 33,000 households of the 75,000 households signed up to that in just over a year," says Grant.

    "Clearly we've hit a sweet spot where people do want to look at the core things that they're doing with the council and not have to phone us up. We can increasingly give more value and products into My Account as we grow the functionality."

    The council estimates that My Account saved more than 900 hours of employee time within six months of going live. It’s now been running for more than a year, incorporating extra elements along the way such as arranging licences for taxi drivers and ordering bins.

    "The biggest saving was actually getting us to be free from our desks because we can use the web browser for our services anywhere," says Grant.

    "We've had a sharp drop in the number of people visiting customer services because they don't need to, we've had a sharp drop in email and phone traffic – between 10 and 30 percent in each area – so there are fewer interactions because they're not needed.

    "What we're trying to do is prepare ourselves to grow the business later when we can base that on sales and products and services based on this knowledge."

    Citizens as customers

    Grant wants to integrate citizens' data into the cloud so that staff, customers, and partners are all connected and each individual can automatically be offered the specific services they want before they've even thought of it. 

    The new objective is described in a new "Connected Knowledge" strategy as a "digital business service experience within a smarter, data-driven council."

    "Any other person in the private sector would be saying we've got to acquire some customers first, which is your biggest cost," Grant explains. "We've got those customers."

    He hopes to motivate those customers to use the council to find further products and services provided by private companies.

    "Ninety-five percent if not more of our citizens probably wouldn't need us for more than the bins but they're spending money, they're earning money in our area, how do they spend it with us, that's really the business model," he says.

    "Digital isn't really an end itself, it's a way of understanding your customer better so that we can be a better council and more resilient and financially secure."

    AVDC is also experimenting with emerging consumer technology. It recently became the first council in the country to conduct trials with Amazon Alexa, to see how the voice-controlled personal assistant can improve customer support services.

    Read next: How Capital One taught Amazon’s Alexa AI assistant to help you manage your money

    Councils across the country are struggling to provide adequate services due to cuts to their funding that are only set to deepen. Grant believes that the current socioeconomic climate will force them to radically change their established practices, and further draw on practices from the private sector.

    "Most councils are hoping the casual customers go back to the golden age when they would queue up outside and don't mind queuing in the rain, and then fill their name and address in 14 times on bits of Xeroxed paper," says Grant.

    "You've got to try and predict where your customers have got to, and talk about them as customers, not as citizens or people that just have your service.

    "You've got to stop thinking like you're used to thinking, and that's a bit easy to say and hard to do, but I think the results are a slicker and more effective organisation, and one that's sustainable."


    Faster page loads

    A webpage today is often the sum of many different components. A user’s home page on a social-networking site, for instance, might display the latest posts from the user’s friends; the associated images, links, and comments; notifications of pending messages and comments on the user’s own posts; a list of events; a list of topics currently driving online discussions; a list of games, some of which are flagged to indicate that it’s the user’s turn; and of course the all-important ads, which the site depends on for revenues.

    With increasing frequency, each of those components is handled by a different program running on a different server in the website’s data center. That reduces processing time, but it exacerbates another problem: the equitable allocation of network bandwidth among programs.

    Many websites aggregate all of a page’s components before shipping them to the user. So if just one program has been allocated too little bandwidth on the data center network, the rest of the page — and the user — could be stuck waiting for its component.

    At the Usenix Symposium on Networked Systems Design and Implementation this week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are presenting a new system for allocating bandwidth in data center networks. In tests, the system maintained the same overall data transmission rate — or network “throughput” — as those currently in use, but it allocated bandwidth much more fairly, completing the download of all of a page’s components up to four times as quickly.

    “There are easy ways to maximize throughput in a way that divides up the resource very unevenly,” says Hari Balakrishnan, the Fujitsu Professor in Electrical Engineering and Computer Science and one of two senior authors on the paper describing the new system. “What we have shown is a way to very quickly converge to a good allocation.”

    Joining Balakrishnan on the paper are first author Jonathan Perry, a graduate student in electrical engineering and computer science, and Devavrat Shah, a professor of electrical engineering and computer science.

    Central authority

    Most networks regulate data traffic using some version of the transmission control protocol, or TCP. When traffic gets too heavy, some packets of data don’t make it to their destinations. With TCP, when a sender realizes its packets aren’t getting through, it halves its transmission rate, then slowly ratchets it back up. Given enough time, this procedure will reach an equilibrium point at which network bandwidth is optimally allocated among senders.
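    That halve-then-ratchet behaviour, additive increase, multiplicative decrease, is simple enough to sketch in a few lines; the snippet below is a bare-bones illustration, not a real congestion-control implementation:

```python
# Bare-bones AIMD illustration (arbitrary rate units), not a real TCP stack.
def aimd(rate, lost_packet, increase=1.0, decrease=0.5):
    """Return the sender's next transmission rate."""
    return rate * decrease if lost_packet else rate + increase

rate = 10.0
for loss in [False, False, True, False, False, False]:
    rate = aimd(rate, loss)
    print(round(rate, 1))   # ramps up slowly (11, 12), halves on loss (6), then climbs again
```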

    But in a big website’s data center, there’s often not enough time. “Things change in the network so quickly that this is inadequate,” Perry says. “Frequently it takes so long that [the transmission rates] never converge, and it’s a lost cause.”

    TCP gives all responsibility for traffic regulation to the end users because it was designed for the public internet, which links together thousands of smaller, independently owned and operated networks. Centralizing the control of such a sprawling network seemed infeasible, both politically and technically.

    But in a data center, which is controlled by a single operator, and with the increases in the speed of both data connections and computer processors in the last decade, centralized regulation has become practical. The CSAIL researchers’ system is a centralized system.

    The system, dubbed Flowtune, essentially adopts a market-based solution to bandwidth allocation. Operators assign different values to increases in the transmission rates of data sent by different programs. For instance, doubling the transmission rate of the image at the center of a webpage might be worth 50 points, while doubling the transmission rate of analytics data that’s reviewed only once or twice a day might be worth only 5 points.

    Supply and demand

    As in any good market, every link in the network sets a “price” according to “demand” — that is, according to the amount of data that senders collectively want to send over it. For every pair of sending and receiving computers, Flowtune then calculates the transmission rate that maximizes total “profit,” or the difference between the value of increased transmission rates — the 50 points for the picture versus the 5 for the analytics data — and the price of the requisite bandwidth across all the intervening links.

    The maximization of profit, however, changes demand across the links, so Flowtune continually recalculates prices and on that basis recalculates maximum profits, assigning the resulting transmission rates to the servers sending data across the network.
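
    A rough sketch of that price-and-rate loop, under stated assumptions, is shown below. It is not Flowtune's actual algorithm: the flows, paths, capacities and point values are invented, and a logarithmic value function is assumed so that the profit-maximising rate for each flow has a simple closed form (its point value divided by the total price of the links on its path).

```python
# A minimal market-style allocation loop in the spirit described above
# (not Flowtune itself). Links re-price themselves according to demand,
# and each flow picks the rate that maximises its value minus the price
# of the links it crosses. All numbers here are invented.

flows = {  # flow -> (assigned "points", links on its path)
    "hero_image":    (50.0, ["tor_A", "agg_1"]),
    "analytics_log": ( 5.0, ["tor_A", "agg_1"]),
    "comments_feed": (20.0, ["tor_B", "agg_1"]),
}
capacity = {"tor_A": 10.0, "tor_B": 10.0, "agg_1": 15.0}
price = {link: 1.0 for link in capacity}
step = 0.02

for _ in range(2000):
    # Profit-maximising rate per flow, assuming a logarithmic value function.
    rate = {f: points / sum(price[l] for l in path)
            for f, (points, path) in flows.items()}
    # Links raise prices where demand exceeds capacity, lower them otherwise.
    for link in capacity:
        demand = sum(rate[f] for f, (_, path) in flows.items() if link in path)
        price[link] = max(0.001, price[link] + step * (demand - capacity[link]))

for f in flows:
    print(f"{f:14s} -> {rate[f]:5.2f} bandwidth units")
```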

    The paper also describes a new procedure that the researchers developed for allocating Flowtune’s computations across cores in a multicore computer, to boost efficiency. In experiments, the researchers compared Flowtune to a widely used variation on TCP, using data from real data centers. Depending on the data set, Flowtune completed the slowest 1 percent of data requests nine to 11 times as rapidly as the existing system.

    “Scheduling — and, ultimately, providing guarantees of network performance — in modern data centers is still an open question,” says Rodrigo Fonseca, an assistant professor of computer science at Brown University. “For example, while cloud providers offer guarantees of CPU, memory, and disk, you usually cannot get any guarantees of network performance.”

    “Flowtune advances the state of the art in this area by using a central allocator with global knowledge,” Fonseca says. “Centralized solutions are potentially better because of the global view of the network, but it is very challenging to use them at scale, because of the sheer volume of traffic. [There is] too much information to aggregate, process, and distribute for each decision. This work pushes the boundary of what was thought possible with centralized solutions. There are still questions of how much further this can be scaled, but this solution is already usable by many data center operators.”



    Aylesbury council trials Amazon Alexa to streamline customer support services

    Aylesbury Vale District Council (AVDC) has become the first council in the country to trial Amazon Alexa to provide services for citizens.

    The experiments with the voice-controlled personal assistant emerged as part of a digital transformation strategy dubbed "Right Here Right Now" that began in 2010. The council saved £14 million between then and 2015, half of which is estimated to be down to the digital overhaul.

    The core of the strategy is an online community for residents and businesses called My Account. It encompasses customer transactions, real-time data and automated processes in a system based on the Salesforce Community Cloud software. Through the partnership with Salesforce, AVDC approached Amazon to begin trials of Alexa.

    Read next: AWS announces three new AI and machine learning services for customers

    The device can help residents request council services through the My Account platform without them needing to use a computer, by asking Alexa about rubbish collections, for example.

    "In the background, it goes away and creates a record and puts it into Salesforce, and those who are administering Salesforce know Thomas has missed his bin, a bin has been ordered and it’s coming next Wednesday," explains AVDC CEO Andrew Grant.

    The council is developing different skills with public sector digital services provider Arcus Global to use the system as a customer service desk, writing new scripts along the way for whatever residents may request.

    They're currently testing it to help people with diabetes arrange for their insulin pen needles to be collected from home. The collection of such sharps waste involves a specific method of disposal.

    Alexa can ask which colour bin it's been put in and when the resident wants it to be collected. It can then use the information to arrange a specific day on which the council can come.
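
    As a purely hypothetical sketch of that flow (AVDC's real integration runs through Salesforce and the skills built with Arcus Global; none of the slot names, endpoint or payload fields below are theirs, and the request shape is simplified), a skill's intent handler might pull the bin colour and collection day out of the voice request and create a service-request record over HTTP:

```python
import requests

# Hypothetical sketch only: slot names, the endpoint and the payload fields
# are invented; the council's real integration creates records in Salesforce.
CRM_ENDPOINT = "https://example-council-crm.invalid/api/service-requests"

def handle_sharps_collection(intent_request, resident_id):
    """Turn a voice request into a service-request record and a spoken reply."""
    slots = intent_request["intent"]["slots"]
    bin_colour = slots["binColour"]["value"]          # e.g. "yellow"
    collection_day = slots["collectionDay"]["value"]  # e.g. "Wednesday"

    record = {
        "residentId": resident_id,
        "requestType": "sharps_collection",
        "binColour": bin_colour,
        "requestedDay": collection_day,
    }
    # Back-office staff would see this case appear in their CRM queue.
    requests.post(CRM_ENDPOINT, json=record, timeout=10).raise_for_status()

    return (f"Okay, I've booked a collection of your {bin_colour} sharps bin "
            f"for {collection_day}.")
```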

    Read next: Reward Gateway launches world’s first employee engagement skill for Amazon Alexa

    "It’s taking a lot of the routine tasks and putting them into an AI environment in a sympathetic way," says Grant. "When we do need to put through someone that you need to talk to in a more detailed way, at least we've taken the cost down of the initial inquiry, and put the expert as close to you as possible rather than several steps later in the traditional method."

    Alexa has enjoyed rapid uptake commercially but is still finding its feet in the public sector. Amazon was happy to collaborate with the council to help the company assess the potential applications.

    "They want people to create new apps in the skills and they were very excited that we were playing around with it," says Grant.

    Grant predicts that the experiments with AI could extend beyond connecting people at home with basic council services to drawing on databases from other areas of the public sector. Citizens could use Alexa to request new prescriptions from their GP, adding a valuable new interface for vulnerable or lonely residents.

    “It's giving more liberal access to people that are vulnerable or at home or are lonely, or partially satisfied," says Grant. "It's a much more democratic way of getting people services they want, rather than them having to phone us up or even to come in. It's pushing that cost down, but also the value up of other interactions with the council."


    Home secretary Amber Rudd calls for police access to WhatsApp

    Home secretary Amber Rudd has called for law enforcement to be given access to encrypted messages on WhatsApp and similar services, a demand that is likely to fuel the ongoing debate over whether companies should create backdoors into their encryption technologies for investigators.

    Khalid Masood, the terrorist who killed four people outside Parliament on Wednesday, had sent a message on WhatsApp a little before the attack, according to reports.

    "We need to make sure that organizations like WhatsApp, and there are plenty of others like that, don't provide a secret place for terrorists to communicate with each other," Home Secretary Amber Rudd said on the Andrew Marr Show on Sunday.

    "It used to be that people would steam open envelopes or just listen in on phones when they wanted to find out what people were doing, legally, through warranty," she said "But on this situation we need to make sure that our intelligence services have the ability to get into situations like encrypted WhatsApp."

    Rudd told Sky News that she supports end-to-end encryption and "it has its place to play" but "we also need to have a system whereby when the police have an investigation, where the security services have put forward a warrant, signed off by the home secretary, we can get that information when a terrorist is involved."

    When challenged that this was incompatible with end-to-end encryption, Rudd said: "It’s not incompatible – you can have a system whereby they can build it so we can have access to it when absolutely necessary."

    Rudd is meeting internet companies on Thursday to set up an industry board that will also address the takedown of terrorist content and propaganda from their platforms. The efforts by tech companies have so far been inadequate, despite an initiative announced last year, she said.

    WhatsApp, which was acquired by Facebook in 2014, applies end-to-end encryption to messages, voice calls and video calls between users running WhatsApp client software released after March 31, 2016, using the Signal Protocol designed by Open Whisper Systems.

    According to WhatsApp: "This end-to-end encryption protocol is designed to prevent third parties and WhatsApp from having plaintext access to messages or calls. What's more, even if encryption keys from a user's device are ever physically compromised, they cannot be used to go back in time to decrypt previously transmitted messages."
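
    The "can't go back in time" property comes from deriving a fresh key for every message and discarding it after use. The toy sketch below illustrates only that ratcheting principle; it is emphatically not the Signal Protocol, and the labels and seed value are invented.

```python
import hashlib
import hmac

def ratchet(chain_key: bytes):
    """Derive (next_chain_key, message_key) from the current chain key."""
    next_chain = hmac.new(chain_key, b"chain", hashlib.sha256).digest()
    message_key = hmac.new(chain_key, b"message", hashlib.sha256).digest()
    return next_chain, message_key

chain = hashlib.sha256(b"shared secret from the initial key exchange").digest()
message_keys = []
for _ in range(3):
    chain, mk = ratchet(chain)
    message_keys.append(mk)   # a real client uses each key once, then deletes it

# Stealing the current chain key later doesn't help an attacker: the one-way
# function can't be run backwards to recover the earlier message keys.
print("current chain key :", chain.hex()[:16], "...")
print("past message keys :", [mk.hex()[:16] for mk in message_keys])
```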

    Rudd's demand for government access to encrypted messages on services like WhatsApp echoes last year's dispute in the US between Apple and the FBI, which had asked the company for assistance in unlocking an iPhone used by Syed Rizwan Farook, one of the attackers in San Bernardino, California, in December 2015.

    "If I was talking to [Apple CEO] Tim Cook, I would say to him this is something completely different," Rudd said.

    "We're not saying 'open up', we don't want to 'go into the cloud', we don't want to do all sorts of things like that," she added. "But we do want them to recognise that they have a responsibility to engage with government, to engage with law enforcement agencies when there is a terrorist situation."

    The government would do it all through "carefully thought-through, legally covered arrangements," she said, but did not rule out other action to force companies to cooperate.

    WhatsApp could not be immediately reached for comment after business hours. In a statement to news outlets it said it was horrified by the attack in London and was cooperating with law enforcement as they continue their investigations.

    "Compelling companies to put backdoors into encrypted services would make millions of ordinary people less secure online," Jim Killock of Open Rights Group said in a statement. "We all rely on encryption to protect our ability to communicate, shop and bank safely."

    The privacy and free speech advocacy group said it is right that technology companies should help the police and intelligence agencies with investigations into specific crimes or terrorist activity, where possible.

    "This help should be requested through warrants and the process should be properly regulated and monitored," it added.


    A big leap toward tinier lines

    Image caption: These scanning electron microscope images show the sequence of fabrication of fine lines by the team's new method. First, an array of lines is produced by a conventional electron beam process (top). The addition of a block copolymer material and a topcoat results in a quadrupling of the number of lines (center). Then the topcoat is etched away, leaving the new pattern of fine lines exposed (bottom). Courtesy of the researchers.
    For the last few decades, microchip manufacturers have been on a quest to find ways to make the patterns of wires and components in their microchips ever smaller, in order to fit more of them onto a single chip and thus continue the relentless progress toward faster and more powerful computers. That progress has become more difficult recently, as manufacturing processes bump up against fundamental limits involving, for example, the wavelengths of the light used to create the patterns.

    Now, a team of researchers at MIT and in Chicago has found an approach that could break through some of those limits and make it possible to produce some of the narrowest wires yet, using a process with the potential to be economically viable for mass manufacturing with standard types of equipment.

    The new findings are reported this week in the journal Nature Nanotechnology, in a paper by postdoc Do Han Kim, graduate student Priya Moni, and Professor Karen Gleason, all at MIT, and by postdoc Hyo Seon Suh, Professor Paul Nealey, and three others at the University of Chicago and Argonne National Laboratory. While there are other methods that can achieve such fine lines, the team says, none of them are cost-effective for large-scale manufacturing.

    The new approach includes a technique in which polymer thin films are formed on a surface, first by heating precursors so they vaporize, and then by allowing them to condense and polymerize on a cooler surface, much as water condenses on the outside of a cold drinking glass on a hot day.

    “People always want smaller and smaller patterns, but achieving that has been getting more and more expensive,” says Gleason, who is MIT’s associate provost as well as the Alexander and I. Michael Kasser (1960) Professor of Chemical Engineering. Today’s methods for producing features smaller than about 22 nanometers (billionths of a meter) across generally require either extreme ultraviolet light with very expensive optics or building up an image line by line, by scanning a beam of electrons or ions across the chip surface — a very slow process and therefore expensive to implement at large scale.

    The new process uses a novel integration of three existing methods. First, a pattern of lines is produced on the chip surface using well-established lithographic techniques, in which an electron beam is used to "write" the pattern on the chip.

    Then, a layer of material known as a block copolymer — a mix of two different polymer materials that naturally segregate themselves into alternating layers or other predictable patterns — is formed by spin coating a solution. The block copolymers are made up of chain-like molecules, each consisting of two different polymer materials connected end-to-end.

    “One half is friendly with oil, the other half is friendly with water,” Kim explains. “But because they are completely bonded, they’re kind of stuck with each other.” The dimensions of the two blocks predetermine the sizes of periodic layers or other patterns they will assemble themselves into when they are deposited.

    Finally, a top, protective polymer layer is deposited on top of the others using initiated chemical vapor deposition (iCVD). This top coat, it turns out, is a key to the process: It constrains the way the block copolymers self-assemble, forcing them to form into vertical layers rather than horizontal ones, like a layer cake on its side.

    The underlying lithographic pattern guides the positioning of these layers, but the natural tendencies of the copolymers cause their width to be much smaller than that of the base lines. The result is that there are now four (or more, depending on the chemistry) lines, each of them a fourth as wide, in place of each original one. The combination of the lithographed layer and topcoat “controls both the orientation and the alignment” of the resulting finer lines, explains Moni.
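
    As a back-of-the-envelope illustration of that density multiplication (the figures below are invented, not taken from the paper): if each lithographed guide line is replaced by four finer lines, the pitch shrinks by the same factor.

```python
# Back-of-the-envelope sketch of the density multiplication described above.
# The figures are illustrative, not taken from the paper.

base_pitch_nm = 80        # pitch of the electron-beam-written guide lines (assumed)
multiplication = 4        # finer lines produced per original guide line

final_pitch_nm = base_pitch_nm / multiplication
final_line_width_nm = final_pitch_nm / 2   # assuming equal line and gap widths

print(f"{base_pitch_nm} nm guide pitch -> {final_pitch_nm:.0f} nm pitch, "
      f"~{final_line_width_nm:.0f} nm lines after quadrupling")
```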

    Because the top polymer layer can additionally be patterned, the system can be used to build up any kind of complex patterning, as needed for the interconnections of a microchip.

    Most microchip manufacturing facilities use the existing lithographic method, and the CVD process itself is a well-understood additional step that could be added relatively easily. Thus, implementing the new method could be much more straightforward than other proposed methods of making finer lines. With the new method, Gleason says, “you wouldn’t need to change all those machines. And everything that’s involved are well-known materials.”

    “Being able to create sub-10-nanometer features with polymers is major progress in the area of nanofabrication,” says Joerg Lahann, a professor of chemical engineering at the University of Michigan, who was not involved in this work. “The quality and robustness of this process will open an entirely new area of applications, from nanopatterning to nanotribology.”

    Lahann adds, “This work is an ingenious extension of previous research by these researchers. The fact that they can demonstrate arbitrary structures highlights the quality and versatility of this novel technology.”

    The team also included Shisheng Xiong at the University of Chicago and Argonne National Laboratory, and Leonidas Ocola and Nestor Zaluzec at Argonne. The work was supported by the National Science Foundation and the U.S. Army Research Office, through MIT’s Institute for Soldier Nanotechnologies.



    Online safety: How to keep children safe on Facebook, instant messaging apps and other online dangers

    Contents

    • Set some rules
    • The dangers
    • How to make YouTube and Facebook safer for kids
    • Microsoft Family Security in Windows 10
    • Parents who share too much
    • Ways to make the internet safe
    • User settings
    • Parental controls on an iPhone and iPad
    • Conclusion

    Much of the internet is a fabulous resource for kids, whether it's Wikipedia for helping with homework, online games, social networks, videos, music and more. However, there are an equal number of websites that you wouldn’t want them going anywhere near.

    One of the greatest challenges facing parents these days is how to ensure that their children remain safe online. With so many kids now having tablets, smartphones, or PCs of their own, it’s increasingly difficult to know what content they access and who they’re meeting on the web. See also: Best parental control software 2017

    A recent study by the Oxford Internet Institute (OII) at Oxford University revealed that of 515 interviewed 12- to 15-year-old children, 14 percent had had a 'negative' online experience in the past year, 8 percent had been contacted by strangers, almost 4 percent had seen someone pretend to be them online, 2 percent had seen sexual content that made them feel uncomfortable, and 3 percent had seen something that scared them.

    A huge majority (90 percent) of the children's parents either did not know what parental filters were or were not using them. Even the children whose parents did use filters were at risk of viewing the wrong sort of information: the filters could return damaging false positives, leaving them more vulnerable or ill-informed than before they read the information.

    The OII suggests that rather than parental filters, which it says should be turned off as early as possible, we need to properly educate children. Future research into keeping kids safe online should "look carefully at the long-term value of filters and see whether they protect young people at a wider range of ages".

    At the end of the day, whether you choose to go down the route of parental controls or better education without the rose-tinted glasses is really up to you.

    In this article we’ll explain what the dangers are and point out ways you can protect your kids from them. Much of our advice is common sense, but in addition there are some settings you can apply to limit the content and apps available on a phone, tablet or PC. Also see: How much screen time is healthy for kids?

    How to keep kids safe online: Set some rules

    Kids these days are digital natives. They've grown up with the internet and have no concept of what life was like without it. They’re completely at home with technology: using a mouse or touchscreen to navigate is as much a life skill as learning to read and write.

    In fact, children tend to learn to use a touchscreen way before they can read or write, using colours, images and symbols instead of words to navigate around apps and websites in order to get to a video or game they like.

    Whatever the age of your kids, it’s important to keep them safe when browsing websites, using social networking services such as Facebook, and chatting with friends using instant messaging programs.

    Although your children may know more about using a laptop, tablet and the internet than you do, it’s your responsibility to ensure they're protected from the parts of the web that present a danger to them.

    The dangers (see below) may sound bad, but the good news is that you can prevent most of them happening without too much time, effort or money.

    Common sense plays a bigger part than you might think. For a start, we’d recommend not allowing children to use a device - laptop, tablet or phone - in their own room. Asking them to use it in a communal area should discourage most inappropriate activities as it will be obvious what they’re up to even if you only glance in their direction.

    The most important thing to do is to talk to each child and explain (in a way appropriate to their age) the dangers that the internet could pose to them, and why they can’t use their devices in their room.

    Also, encourage them to tell you whenever they see anything that makes them uncomfortable or upsets them, or simply isn’t what they expected. You can delete inappropriate websites from your browser's history, and add the site's address to a parental control filter list (we'll come to this in a minute).

    Also encourage them to tell you if they receive any threatening or frightening messages or emails - you can add the sender's address to most email programs' blocked list.
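
    To illustrate the filter-list idea mentioned above in its simplest possible form (real parental-control software is far more sophisticated, and the file name and domain below are invented), a blocklist is just a set of addresses that every requested site is checked against:

```python
from urllib.parse import urlparse

# Minimal sketch of a site blocklist: add offending domains to a file and
# check requested addresses against it. Purely illustrative.
BLOCKLIST_FILE = "blocked_sites.txt"

def add_to_blocklist(url):
    domain = urlparse(url).netloc.lower()
    with open(BLOCKLIST_FILE, "a") as f:
        f.write(domain + "\n")

def is_blocked(url):
    domain = urlparse(url).netloc.lower()
    try:
        with open(BLOCKLIST_FILE) as f:
            blocked = {line.strip() for line in f if line.strip()}
    except FileNotFoundError:
        blocked = set()
    return domain in blocked

add_to_blocklist("http://unsuitable-example.invalid/page")
print(is_blocked("http://unsuitable-example.invalid/other-page"))  # True
```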

    You should also make it plain what is acceptable and what isn’t acceptable online. That’s something only you can decide, but you can’t expect your kids to know they’re doing something wrong if you haven’t set any boundaries.

    You might, for example, tell your child that they're not allowed to download apps or files without your permission first, nor share a file with anyone without your consent. You could also set rules about whether they can use any instant messaging services, tell them not to reply to unsolicited emails or sign up for free accounts without you first checking that it's ok.

    How to keep kids safe online: The dangers

    Online gaming risks

    While much of the media focus tends to revolve around the problems children can encounter on social media sites such as Snapchat, Facebook, and Instagram (all of which require account holders to be at least thirteen years old), recent research from security experts at Kaspersky Lab has found that online gaming is now a real source of concern.

    In a study of 11- to 16-year-olds, Kaspersky discovered that 38 percent of children had encountered people pretending to be someone else on gaming platforms, while 23 percent had been asked personal or otherwise suspicious questions while online.

    Perhaps the most worrying statistic though was that 20 percent of the children interviewed said that they trusted the gaming platform so much that they would see no problem meeting contacts from it in real life. This is compounded by the fact that nearly a third of the children in the study said that their parents had no idea who they talked to when they played games online.


    What is Bixby? Samsung announces a new digital assistant for the Galaxy S8 smartphone

    The Samsung Galaxy S8 is sure to arrive with many new features and upgrades, but perhaps one of the most significant announced so far is Bixby, a new voice control interface and assistant. Samsung has high hopes for the successor to the often underwhelming S-Voice, but what does this newcomer bring to the table? We take a look at what we know about Bixby so far, and whether Apple’s Siri, Microsoft’s Cortana, and Google’s Assistant should be worried.

    See also: How to use Google Assistant and Google Now, How to use Cortana on Android, Samsung Galaxy S8 latest rumours - release date, UK price, and features

    What is Bixby and what does it do?

    Bixby is a brand new voice assistant which will debut on the Samsung Galaxy S8. The system has been developed by the Korean tech giant to have deep integration with its phones, essentially offering a replacement for the touch interface in many instances.

    Now, of course, that’s exactly what Siri, Cortana, Google Assistant, and even the Amazon Echo seem to offer on their respective platforms. So what makes Bixby special?

    Wider level of control

    In designing the new assistant, Samsung has taken the line that, rather than being a cute, witty companion that proffers casual information such as the weather or the last five films Bill Murray starred in – which seems to be the flavour offered by Cortana and Siri – Bixby will instead be a replacement for, and complement to, the touch interface on the device itself. This is achieved by making the software burrow down to deeper levels of functionality within the operating system and apps.

    ‘When an application becomes Bixby-enabled,’ states Injong Rhee, Samsung’s Head of R&D, Software & Services, ‘Bixby will be able to support almost every task that the application is capable of performing using the conventional interface (i.e. touch commands). Most existing agents currently support only a few selected tasks for an application and therefore confuse users about what works or what doesn’t work by voice command.’

    Contextual awareness

    Paired with this comprehensive control ability comes another important factor, one which Samsung is calling Contextual Awareness. From the information provided so far this seems to suggest that the touch and voice interface will be completely interchangeable. So, for example, if you’re looking at a photograph in your Gallery of a great night out with your partner, you should be able to select the image with a touch then tell Bixby to email it to them. The assistant should be smart enough to realise what you’re looking at and what application to use to complete the task.

    Flexible commands

    Bixby is also built to be flexible with the commands you use, so there would be no definitive ‘do this precise thing’ that you’d need to remember in order to accomplish certain tasks. The system should be able to recognise your intentions from keywords in your commands and get things done without annoying you.
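
    To illustrate that keyword-driven idea in miniature (this is purely a toy, not how Bixby is implemented; the intents and keyword sets are invented), a command can be matched to whichever intent its keywords overlap most:

```python
# Toy keyword-based intent matcher, for illustration only.
INTENT_KEYWORDS = {
    "send_photo":   {"send", "share", "email", "photo", "picture"},
    "set_alarm":    {"alarm", "wake", "remind"},
    "call_contact": {"call", "ring", "phone"},
}

def recognise_intent(utterance):
    words = set(utterance.lower().replace(",", "").split())
    scores = {intent: len(words & keywords)
              for intent, keywords in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(recognise_intent("Could you email this picture to Sam?"))  # send_photo
print(recognise_intent("Ring my mum please"))                    # call_contact
```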

    The Samsung Galaxy S8 will have a dedicated Bixby button so you can quickly launch the feature, and the assistant will gradually be added to many other products across its range, including TVs, air-conditioning units, and basically anything that contains a microphone and internet connectivity.


    Is Bixby better than Siri, Cortana, and Google Assistant?

    Well, if it behaves with the intelligence and competence that Samsung suggests then it will be a very impressive addition to the Galaxy S8. The thing is, we’ve seen and heard plenty of claims in the past from companies boasting a revolutionary breakthrough that will change human existence forever. Then when it arrives it’s a bog-standard product with only a new colour scheme to differentiate it from its rivals.

    Bixby could be an innovative way to streamline interactions with our devices, but at launch the software will only be available on the Samsung Galaxy S8, and even then only with a handful of ‘Bixby-enabled’ applications. We’re betting that these apps will be the bespoke offerings that usually come preinstalled on Samsung devices, and this causes a big problem.

    With Samsung targeting inconsistencies in the way other voice assistants work, as Injong Rhee stated above, the fact that Bixby will only be available in certain apps at launch means that there will be immediate confusion over when and where the assistant will function. This could be solved in time as developers code in support for Bixby, but so far Samsung has only stated that it hopes to eventually release an SDK (Software Development Kit), meaning Bixby-enabled apps are likely to be in the minority for a while yet.

    So it's very much a wait-and-see kind of deal. But we have to say that Bixby, backed by a determined and focussed Samsung, could be a very interesting product.

    When can I get Bixby?

    The Samsung Galaxy S8 is due to be released on the 29th March, which is very soon. As Bixby is a central feature in the new device we won’t have to wait long then until we’re able to see how it works in real life. At the moment there are no details on whether the feature will be available to older devices as an app, but as more information becomes available we’ll update this feature to let you know. So be sure to keep checking back.

    As for the Galaxy S8, you can't get it on contract yet (or even pre-order it on contract).
