01 Apr

Case Study: On-Demand Infrastructure on AWS Helps Capital One DevOps Teams Move Faster Than Ever


By using AWS, we’ve cut the time needed to build new application infrastructure by more than 99 percent. With the virtually instantaneous infrastructure available on AWS, our DevOps teams have the building blocks they need to start developing any new product as soon as they understand the intent behind it.
John Andrukonis, Chief Architect, Capital One

Capital One is well known for its early adoption of new technologies to help it transform the banking customer experience. Less obvious, but no less crucial, are the practices and mindsets that position the company to make such effective use of those new technologies—practices and mindsets that are the result of the company’s conscious self-transformation into a digital technology company. The company’s recent embrace of DevOps is just the latest step.

"We realized about a decade ago that, to continue to be a great bank, we needed to reinvent ourselves as a digital technology company," says George Brady, executive vice president and chief technology officer at Capital One. "To be a great technology company, we were going to need to build and architect our own systems and set up a developer culture that would help us attract and retain the most talented people."

DevOps is the latest step in further strengthening the company’s developer culture, the foundation of which was laid in 2010 with the company’s shift from waterfall to agile software development. DevOps, which uses automation, monitoring, and continuous integration of new code to achieve faster development cycles and more frequent, more reliable releases, is a natural fit for a company that wants to be as responsive to customer feedback as possible.

"Our product managers obsess over customer feedback and embrace moving customers’ ideas into products to make their banking and financial services experiences top-notch," says John Andrukonis, the chief architect at Capital One. That’s why the company has a cloud-first policy, under which all new applications are architected for and deployed in the cloud and is steadily increasing its use of microservices and open, integrated architectures.

“Our technology strategy is enabling more and more integration of our systems, which increases our ability to collect and get insights from customer feedback,” says Andrukonis. “But insights are only as valuable as our ability to act quickly on them, and that’s what DevOps helps us do.”

Responding to customer insights is even faster thanks to AWS services such as Amazon Virtual Private Cloud (Amazon VPC), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Simple Storage Service (Amazon S3), and Amazon Relational Database Service (Amazon RDS).

“By using AWS, we’ve cut the time needed to build new application infrastructure by more than 99 percent,” says Andrukonis. "With the virtually instantaneous infrastructure available on AWS, our DevOps teams have the building blocks they need to start developing any new product as soon as they understand the intent behind it."

The company’s embrace of DevOps has also helped Capital One cultivate an even more collaborative culture.

"It used to be that developers’ involvement with products mostly ended after delivery to operations," says Andrukonis. "Now that we’re using DevOps, our developers feel even more ownership of these products and are empowered to get proactive about uptime, supportability, and monitoring. DevOps on the cloud is helping designers, developers, and engineers work together to make the customer experience better and better."

Technical staff aren’t the only Capital One employees who are collaborating more. "A DevOps culture has helped our business product managers feel even more engaged in our technology journey than in the past," says Andrukonis. "Product owners get very excited when we tell them that, because of the much shorter development time on AWS, we can change customer feedback into new features and products in just a few weeks."

Brady says that a DevOps culture also helps the company ensure it is fielding the best team members it can. "Setting up that strong developer culture is important for attracting and retaining talented people. Moving to DevOps on the cloud is just another way that we can cultivate and support independent, autonomous teams that feel empowered to do their best work every day."


Case Study: Capital One on AWS

Capital One is a leading information-based technology company that is on a mission to help its customers succeed by bringing ingenuity, simplicity, and humanity to banking.

CASE STUDY

How Capital One Reduced its Data-Center Footprint, Expanded its Use of Microservices, and Reimagined Banking Using AWS

How did Capital One get to the point where, in 2015, it announced that all new company applications would run in—and all existing applications would be systematically rearchitected for—the cloud? Although Capital One, a technology company that offers financial services, is different in important ways from other companies in its industry, its path to the Amazon Web Services (AWS) Cloud and its cloud-first approach to software development offer useful lessons for large, non-cloud-native, highly regulated enterprises mapping out their own cloud journeys.



CASE STUDY

Capital One Contact Centers Innovate Faster Using Amazon Connect

To best understand Capital One and its long-term strategy, it helps to think of the company not as a bank—despite the fact that the diversified financial services company is, in fact, one of the ten largest U.S. banks by assets and deposits—but as a digital technology company that offers banking services. At Capital One, the central motivating belief is that the winners in next-generation banking will be the companies that make the most creative and innovative use of technology to provide seamless, intelligent, truly excellent customer experiences.


"Each call is a chance to live our mission of bringing simplicity, ingenuity, and humanity to banking. That mission is powered by our strategic use of technology. Our focus is on serving customers the way they want to be served. We are constantly researching new technologies, but we know our voice channel remains crucial for many customers and situations."

Rajiv Sondhi

Vice President, Software Engineering, Capital One


Case Study: Brazilian real estate company rebuilds for greater security and ease with SAP on Microsoft Azure

March 26, 2019

Tegra Incorporadora is a homebuilder and residential real estate company headquartered in Rio de Janeiro, Brazil, and a subsidiary of Canadian company Brookfield Asset Management. When Tegra evaluated the expense and drawbacks of its infrastructure, especially in light of its critical SAP systems, one solution stood out: moving to Microsoft Azure. The company not only gained the cost-saving flexibility and scalability it needed to prosper, but also uses Microsoft services to manage its systems more easily and to gather and share valuable insights into its business.

These days, we need to be much more agile and have innovative solutions. We’ve achieved this with our shift to Azure. It brings us flexibility, economy, innovation, and a great deal of agility.

Dado Amaral: IT General Manager

Tegra Incorporadora

Tegra Incorporadora has been building, buying, and selling homes in Brazil for 40 years. The company has more than 78 million square feet built or under construction in more than 93,000 properties. Tegra relies on a set of critical systems, such as SAP, that it cannot afford to have offline. “These systems perform all our administrative and management tasks,” says Dado Amaral, IT General Manager at Tegra Incorporadora. “We keep all the purchase orders, requests, construction plans, and client contacts and relationships in one system or another.”

The company used a hosting service to house all its systems, but that solution was failing, according to Maurício Baise, IT Infrastructure Manager at Tegra Incorporadora. “The old datacenter relied on virtual machines prone to running out of memory,” he says. The setup was cumbersome, demanding two to three weeks of preplanning for any changes, and extremely expensive. And once deployed, storage could not be rolled back, regardless of usage.

Creating a scalable infrastructure

Tegra moved to Microsoft Azure and revolutionized its IT structure, maximizing scalability and flexibility—at a fraction of the cost of the former system. Baise particularly values the convenience of managing every aspect of the environment in the Azure cloud portal. “From the portal, I can see how much memory and disk each machine is consuming, so I can develop a capacity plan to increase or decrease storage. When I consider how easy it is to turn off a machine and no longer be charged, or to spin up a machine and use it for a few hours, the Azure subscription cost is minimal,” he says.

Amaral underscores the advantages of scalability and flexibility—essential in today’s business world. “These days, we need to be much more agile and have innovative solutions. We’ve achieved this with our shift to Azure. It brings us flexibility, economy, innovation, and a great deal of agility.”

Concerned about migrating its SAP systems to the cloud, Tegra engaged specialist Basis Solutions, a Microsoft Partner Network member. “SAP conversion to Azure has been in high demand by clients recently,” says Basis consultant Flávio Batista. “With Azure, the machines are frequently updated. When we undergo a migration process, we see a performance gain due to the advance in technology and elastic infrastructure.”

BHS Axter, another Tegra partner and member of the Microsoft Partner Network, oversaw infrastructure and connectivity, ensuring that existing systems kept running in parallel throughout the migration. The partner used Azure ExpressRoute for uninterrupted communication and to prevent issues with availability or latency. With ExpressRoute, private connections transmit data directly between systems rather than over the internet. The data was transmitted to Microsoft SQL Server while the application systems were migrated to run on Windows Server in Azure.

Keeping the benefits coming

With its migration to a 100 percent Microsoft environment, Tegra shifted to a highly secure, agile environment with numerous tools to accelerate its effectiveness, like using Azure Backup to reduce data restoration time and reliability challenges. “Besides the resources native to Azure Backup, we created a more tailored and flexible solution for the database,” says BHS Axter Manager Luciano Bernardes. “Tegra has elevated its operations with this solution.”

Tegra embraced additional tools such as Microsoft Power BI, which further stimulated the dissemination of strategic information within the company, including between its São Paulo head office and headquarters in Canada. Tegra Systems Manager Alexandro Coelho found valuable insights—and efficiencies. “With Power BI, we carried out a project with our Canadian holding company and everyone had access to key information in real time,” he says.

Coelho also appreciated that the company’s move to the cloud went unnoticed. “The best news we received about the migration to Azure was—no news,” he continues. “No user complaints. No one noticed when we restructured machines, reducing their number. It was a smooth migration with no impact to our end customers.”

Adds Bira Freitas, Chief Executive Officer at Tegra Incorporadora, “Surrounding ourselves with partners at the level and standard of Microsoft is not only a privilege for us, but also something we focus on and want to do. Our company needs to act in the short term and plan for the future. We have to work with people who also plan 10, 15, 20, or 50 years ahead.”

Find out more about Tegra Incorporadora on Twitter, Facebook, and LinkedIn.

The best news we received about the migration to Azure was—no news. No user complaints. No one noticed when we restructured machines, reducing their number. It was a smooth migration with no impact to our end customers.

Alexandro Coelho: Systems Manager

Tegra Incorporadora


Learn More

Basis Solutions
BHS Axter


Case Study: Packing with Mixed Reality: KLM uses Microsoft HoloLens to redefine its cargo training experience with mixed reality


March 26, 2019

KLM is reducing staff turnover and improving efficiency by using Microsoft HoloLens to train the employees who pack freight crates. Staff can visualize errors and learn on the job, even though security restrictions mean they cannot access, and therefore never see, the crates they pack once they are loaded onto the aircraft.

How do you safely transport cargo by air? Training new cargo staff to pack goods quickly while achieving maximum productivity and minimum damage to goods is an everyday challenge for Dutch airline KLM. Now, using Microsoft HoloLens to innovate its basic training program, KLM has reduced the training time and improved the impact of the training.

KLM Royal Dutch Airlines is the oldest airline in the world still operating under its original name. The company, now part of the Air France–KLM group, operates flights worldwide, carrying 32.7 million passengers and 623,000 tons of cargo every year on its fleet of more than 200 aircraft.

Since its merger with Air France in 2004, KLM’s Cargo business has grown in capacity, in the number of freight destinations it serves, and in the range of cargo types it carries. To offer the best service to its customers, KLM Cargo is using HoloLens to increase workers’ skills and create a more efficient cargo process.

Blazing a trail in training innovation

“In the Netherlands, there is a shortage of low-skilled staff,” says Edwin Bleumink, Innovation Manager for Learning Technology at KLM. “Workers can find a job anywhere, which means they often come and go quickly. We usually have a new group of cargo staff starting every week that needs to be trained.”

The conventional training method consisted of one day of Microsoft PowerPoint presentations followed by a three-day practical training. But employees lacked motivation and often failed to retain critical information.

Moreover, existing staff paid little attention to managing the complete cargo process, and new hires struggled with a lack of visibility into the supply chain.

To overcome these challenges, KLM realized it needed to radically innovate its basic training to onboard new cargo operatives quickly but with more impact than before, so that staff could do their job to a much higher standard.

Henny van Kessel, owner of KLM partner HVK Learning, says: “We essentially had two goals in redesigning the basic training program. One was to improve operational efficiency of the cargo packing process to deliver the best possible customer service, and the other was to give new workers an engaging learning experience that would motivate them and greatly improve their productivity.”

With that in mind, KLM Cargo has used HoloLens to develop a mixed reality training program that is inspiring, absorbing, and impactful. The team increased training efficiency by 25 percent by shortening the program by a day, while improving knowledge retention by 30 percent. Workers are far more aware of the importance of their work, teamwork has improved markedly despite shift patterns, and staff are more motivated and engaged in their work.

Gert Mijnders, KLM Cargo Manager Compliance Knowledge Center, says: “We’re very pleased with the HoloLens training because we can now simulate the entire cargo packing process, which is something we haven’t been able to do before. We also see that trained staff retain knowledge far better, which has undoubtedly led to productivity gains.”

Rethinking educational concepts with visual learning

KLM Cargo began its journey into mixed reality following an experimental collaboration with KLM’s technical division, Engineering & Maintenance (E&M). Van Kessel initiated a partnership between KLM and the Dutch Aerospace Centre to look at ways of innovating E&M training programs to have more impact.

“The Dutch Aerospace Centre was one of the first organizations in the Netherlands to have the HoloLens headset,” says Van Kessel. “We decided to perform a proof of concept to explore how the technology could redefine technical training. We wanted to see whether HoloLens could improve engineers’ knowledge and understanding of a complex system, such as aircraft air conditioning.”

Edwin Bleumink adds: “Taking what we learned from the E&M experiment, we decided to build a simulation of the entire Cargo packing process so that trainees could see, through HoloLens, how building pallets contributes to the process. And if you make a mistake, you can try again—without any consequences. A very common mistake in pallet packing is that cargo workers don’t leave enough space free for locks to be attached, so the pallets can’t be secured on the plane. This surprised many of the HoloLens trainees because, until the training module rejected their pallet, they were convinced it was ready for loading and that they’d done a good job.”

Bleumink continues: “We work in an industry where capital expenditure is enormous. We can’t afford to ground an aircraft to use for training, and physical simulations would be very costly. With HoloLens, we’re able to give trainees a real-life experience without disrupting our fast-moving, time-critical cargo business.”

An impactful learning experience for new workers

As a result of introducing HoloLens to basic cargo training, KLM has seen a number of marked improvements that validate the decision to innovate. Workers are confident and assured in what they do; they understand why it’s so important to pack pallets the right way.

Trainees report that they value being able to ‘learn by doing’ in a safe environment where there is room for mistakes, and floor managers see a noticeable improvement in applying knowledge from training to the job.

The new educational concept with the use of HoloLens has led to a shared learning experience that is far more impactful than conventional training and has redefined the role of the trainers, who have become facilitators during the learning process, rather than instructors transferring knowledge in a static way.

Van Kessel says: “We know from research that with lecture-based training, after 20 minutes of listening you lose attention. But with HoloLens, we saw trainees work on an assignment for 45 minutes without losing focus. When you’re wearing HoloLens, you’re in the moment and your brain can’t think of anything else. That’s the real value of HoloLens when it comes to training.”

Van Kessel continues: “We tested the impact of the new training. The experimental group was trained with HoloLens and the control group was trained in the conventional way using slides. What we found was that the HoloLens group could remember every part and those trained conventionally couldn’t.”

Cargo worker Montero says: “It’s much better because you’re not just learning, you get to work straight away. You actually do the real work but using the headset in a normal space instead of on the work floor where mistakes matter. So, you already have a bit of practical experience when you start the job.”

Now, with its mixed reality training experience, KLM Cargo is in a position to tackle the question of transporting dangerous goods in a way that almost guarantees safety in transit. “You can imagine with dangerous goods, with oil and chemicals, that you could simulate in HoloLens how to pack these goods well,” says Bleumink. “We’d be able to include different training modules depending on the type of goods. With chemicals, for example, there are certain types that cannot be stacked. So, this could be a very interesting next step for Cargo.”

KLM wants to be the most digitized airline in Europe and what the company has seen in the HoloLens cargo project aligns closely with its higher business goals. Becoming digital requires business transformation, reinventing operating models, and doing things differently. The HoloLens project contributes to all of these.

Bleumink concludes: “Although the new basic training is a standalone solution, in the future if we want to expand this approach to other parts of the business, we’ll need to use data. And if we do that, we’ll need to connect it to other processes. Then, who knows what possibilities that might bring.”

Bleumink and Van Kessel are convinced that HoloLens will play an important role in future learning. Combining their knowledge and experience, they started LearningLinkers, which specializes in double-blended learning programs that increase the impact of learning by connecting business and people through smart use of technology.

If you are interested in their educational concepts using HoloLens, please contact them via www.learninglinkers.com.

We essentially had two goals in redesigning the basic training program. One was to improve operational efficiency of the cargo packing process to deliver the best possible customer service, and the other was to give new workers an engaging learning experience that would motivate them and greatly improve their productivity.

Henny van Kessel: Owner

HVK Learning


Case Study: WhiteSource simplifies deployments using Azure Kubernetes Service


March 28, 2019

WhiteSource simplifies open-source usage management for security and compliance professionals worldwide. Now the WhiteSource solution can meet the needs of even more companies, thanks to a re-engineering effort that incorporated Azure Kubernetes Service (AKS).

WhiteSource was created by software developers on a mission to make it easier to consume open-source code. Founded in 2008, the company is headquartered in Israel, with offices in Boston and New York City. Today, WhiteSource serves customers around the world, including Fortune 100 companies. As much as 60 to 70 percent of a modern codebase consists of open-source components. WhiteSource simplifies the process of consuming these components and helps to minimize the cost and effort of securing and managing them so that developers can freely and fearlessly use open-source code.

WhiteSource is a user-friendly, cloud-based, open-source management solution that automates the process for monitoring and documenting open-source dependencies. The WhiteSource platform continuously detects all the open-source components used in a customer’s software using a patent-pending Contextual Pattern Matching (CPM) Engine that supports more than 200 programming languages. It then compares these components against the extensive WhiteSource database. Unparalleled in its coverage and accuracy, this database is built by collecting up-to-date information about open-source components from numerous sources, including various vulnerability feeds and hundreds of community and developer resources. New sources are added on a daily basis and are validated and ranked by credibility.
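To make that detect-and-compare flow concrete, here is a minimal, purely illustrative sketch. The real CPM Engine and WhiteSource database are proprietary, so every component name and advisory below is hypothetical.

```python
# Purely illustrative: the real CPM Engine and WhiteSource database are
# proprietary. Component names and advisories below are hypothetical.

# Stand-in for the vulnerability knowledge base: lookup key -> advisories.
VULN_DB = {
    "examplelib-1.2.3": ["ADVISORY-0001"],  # hypothetical entries
    "tinyutil-0.9.0": [],
}

def fingerprint(name: str, version: str) -> str:
    """Normalize a detected component into a database lookup key."""
    return f"{name.lower()}-{version}"

def scan(detected: list[tuple[str, str]]) -> dict[str, list[str]]:
    """Compare each detected component against the knowledge base."""
    report = {}
    for name, version in detected:
        key = fingerprint(name, version)
        # Unknown components are flagged for review rather than ignored.
        report[key] = VULN_DB.get(key, ["not in database: needs review"])
    return report

print(scan([("ExampleLib", "1.2.3"), ("tinyutil", "0.9.0")]))
```

The sketch captures only the shape of the process: continuous detection produces (name, version) pairs, and each pair is resolved against an ever-growing advisory database.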

You can read more about WhiteSource in this Azure customer story.

Simplifying deployments, monitoring, availability, and scalability

WhiteSource was looking for a way to deliver new services faster to provide more value for its customers. The solution required more agility and the ability to quickly and dynamically scale up and down, while maintaining the lowest costs possible.

Because WhiteSource is a security-focused, DevOps-oriented company, its solution required the ability to deploy fast and to roll back even faster. Focusing on an immutable approach, WhiteSource looked for a built-in way to refresh the environment on deployment or restart, keeping no data on the app nodes.

This was the impetus to investigate containers. Containers make it possible to run multiple instances of an application on a single instance of an operating system, thereby using resources more efficiently. Containers also enable continuous deployment (CD), because an application can be developed on a desktop, tested in a virtual machine (VM), and then deployed for production in the cloud.

Finding the right container solution

The WhiteSource development team explored many vendors and technologies in its quest to find the right container orchestrator. The team knew that it wanted to use Kubernetes because it was the best established container solution in the open-source community. The WhiteSource team was already using other managed alternatives, but it hoped to find an even better way to manage the building process of Kubernetes clusters in the cloud. The solution needed to quickly scale per work queue and to keep the application environment clean post-execution. However, the Kubernetes management solutions that the team tried were too cumbersome to deploy, maintain, and get proper support for.

Fortunately, WhiteSource has a long-standing relationship with Microsoft. A few years ago, the team responsible for Microsoft Visual Studio Team Services (now Azure DevOps) reached out to WhiteSource after hearing customers request a better way to manage the open-source components in their software. Now WhiteSource Bolt is available as a free extension to Azure DevOps in the Azure Marketplace. In addition, Microsoft signed a global agreement with WhiteSource to use the WhiteSource solution to track open-source components in Microsoft software and in the open-source projects that Microsoft supports.

A Microsoft solution specialist demonstrated Azure Kubernetes Service to the WhiteSource development team, and the team knew immediately that it had found the right easy-to-use solution. AKS manages a hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications—without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking the application offline. AKS also supported a cloud-agnostic solution that could run the WhiteSource application on multiple clouds.

Architecture

The WhiteSource solution was redesigned as a multicontainer application. The application is written mostly in Java and runs under a WildFly (JBoss) application server. It is deployed to an AKS cluster that pulls images from Azure Container Registry and runs in 60 to 70 Kubernetes pods.

The main WhiteSource app runs on Azure Virtual Machines and is exposed from behind a web application firewall (WAF) in Azure Application Gateway. The WhiteSource developers were early adopters of Application Gateway, which provides an application delivery controller (ADC) as a service with Layer 7 load-balancing capabilities—in effect, handling the solution’s front end.

The services that run on AKS communicate with the front end through Application Gateway to get requests from clients, process them, and return the answers to the application servers. When the WhiteSource application starts, it queries an Azure Database for MySQL database for an open-source component to process. After finding the data, it starts processing, sends the results to the database, and exits. The process running in the container can then be scrubbed entirely from the environment: the container starts fresh, no data is saved, and the cycle starts all over.
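That stateless, single-task lifecycle can be sketched as a worker that processes one item and exits. This is only a shape sketch, not WhiteSource code: the in-memory queue and dict stand in for Azure Queue storage and Azure Database for MySQL, and all names are hypothetical.

```python
import queue

def run_worker(tasks: queue.Queue, results: dict) -> None:
    """Process at most one task, persist the result, then return (exit).

    Mirrors the stateless-container pattern: no data is kept on the node,
    so the container can be scrubbed and restarted fresh after each task.
    """
    try:
        component = tasks.get_nowait()  # poll the work queue
    except queue.Empty:
        return  # nothing to process; exit cleanly
    results[component] = f"processed:{component}"  # persist to the "database"
    # Nothing else survives: the process ends here, and the orchestrator
    # starts a fresh container for the next task.

tasks = queue.Queue()
tasks.put("examplelib-1.2.3")
results = {}
run_worker(tasks, results)
print(results)  # the only state that outlives the worker
```

Because the worker keeps nothing locally, killing or replacing its container between tasks loses no data, which is exactly what makes the scrub-and-restart approach safe.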

Containers also make it easy to continuously build and deploy applications. The containerized workflow is integrated into the WhiteSource continuous integration (CI) and continuous deployment (CD) pipeline in Jenkins. The developers update the application by pushing commits to GitHub. Jenkins automatically runs a new container build, pushes container images to Azure Container Registry, and then runs the app in AKS. By setting up a continuous build to produce the WhiteSource container images and orchestration, the team has increased the speed and reliability of its deployments. In addition, the new CI/CD pipeline serves environments hosted on multiple clouds.

We write our AKS manifests and implement CI/CD so we can build it once and deploy it on multiple clouds. That is the coolest thing!

Uzi Yassef: Senior DevOps Engineer

WhiteSource

Azure services in the WhiteSource solution

The WhiteSource solution is built on a stack of Azure services that includes the following primary components, in addition to AKS:

  • Azure Virtual Machine Scale Sets are used to run the AKS containers. They make it easy to create and manage a group of identical, load-balanced, autoscaling VMs, and they are designed to support scale-out workloads like the WhiteSource container orchestration based on AKS.
  • Application Gateway is the web traffic load balancer that manages traffic to the WhiteSource application. A favorite feature is connection draining, which enables the developers to change members within a back-end pool without disruption to the service. Existing connections to the WhiteSource application continue to be sent to their previous destinations until either the connections are closed or a configurable timeout expires.
  • Azure Database for MySQL is a relational database service based on the open-source MySQL Server engine that stores information about a customer’s detected open-source components.
  • Azure Blob storage is optimized for storing massive amounts of unstructured data. The WhiteSource application uses blob storage to serve reports directly to a customer’s browser.
  • Azure Queue storage is used to store large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. The WhiteSource solution uses queues to create a backlog of work to process asynchronously.

Using AKS, we get all of the advantages of Kubernetes as a service without the overhead of building and maintaining our own managed cluster. And I get my support in one place for everything. As a Microsoft customer, for me, that’s very important.

Uzi Yassef: Senior DevOps Engineer

WhiteSource

Benefits of AKS

The WhiteSource developers couldn’t help comparing AKS to their experience with Amazon Elastic Container Service for Kubernetes (Amazon EKS). They felt that the learning curve for AKS was considerably shorter. Using the AKS documentation, walk-throughs, and example scenarios, they ramped up quickly and created two clusters in less time than it took to get started with EKS. The integration with other Azure components provided a great operational and development experience, too.

Other benefits included:

  • Automated scaling. With AKS, WhiteSource can scale its container usage according to demand. The workloads can change dynamically, enabling some background processes to run when the cluster is idle, and then return to running the customer-facing services when needed. In addition, more instances can run for a much lower cost than they could with the previous methods the company used.
  • Faster updates. Security is a top priority for WhiteSource. The company needs to update its databases of open-source components as quickly as possible with the latest information. The team was accustomed to a more manual deployment process, so the ease of AKS came as a surprise. Its integration with Azure DevOps and the CD pipeline makes it simple to push updates as often as needed.
  • Safer deployments. The WhiteSource deployment pipeline includes rolling updates. An update can be deployed with zero downtime by incrementally updating pod instances with new ones. Even if an update includes an error and the application crashes, it doesn’t terminate the other pods and performance is not affected.
  • Control over configurations. The entire WhiteSource environment is set up to use a Kubernetes manifest file that defines the desired state for the cluster and the container images to run. The manifest file makes it easy to configure and create all the objects needed to run the application on Azure.
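The article doesn't reproduce WhiteSource's actual manifest; a minimal sketch of the pattern it describes, with hypothetical names and image, combines the desired-state declaration with the rolling-update strategy behind the zero-downtime deployments:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whitesource-web            # hypothetical name for illustration
spec:
  replicas: 3                      # desired state: AKS keeps three pods running
  selector:
    matchLabels:
      app: whitesource-web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1            # replace pods incrementally: zero downtime
      maxSurge: 1
  template:
    metadata:
      labels:
        app: whitesource-web
    spec:
      containers:
        - name: web
          image: myregistry.azurecr.io/whitesource-web:1.0   # hypothetical image
          ports:
            - containerPort: 80
```

Because the manifest declares desired state rather than steps, a faulty update stalls the rollout at the failing pod while the remaining replicas keep serving traffic.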

Summary

The developers at WhiteSource understand that every improvement they make to the infrastructure of their solution helps enhance the product offering. They are moving forward with containers and beginning to containerize other utilities and small tool sets. In addition, all new development is now being done using AKS.

The simplicity of the AKS deployments has more than made up for any inconvenience related to the move to containers. The entire managed Kubernetes experience—from the first demo of AKS to the most recent application deployment—far exceeded the company’s expectations.

In the past, we were setting up our own Kubernetes cluster, maintaining it, and updating it. It was cumbersome and it took time and specialized knowledge. Now we can just focus on building, deploying, and maintaining our application. AKS shortens the time and helps us to focus more on innovation.

Uzi Yassef: Senior DevOps Engineer

WhiteSource


Case Study: Cross-group data marts open new frontiers for Hungarian bank

Cross-group data marts open new frontiers for Hungarian bank

March 28, 2019

Part of the OTP Group, OTP Bank is the leading retail bank in Hungary, growing rapidly across Central and Eastern Europe through targeted acquisitions. To gain deeper visibility into its new, vibrant business activities, the company has built internal data marts based on Microsoft SQL Server and SharePoint technology. Now analysts and managers can make educated decisions through truly data-driven insights.

Familiar interface, real data power

Headquartered in Budapest, OTP Group is a fast-growing bank with a strong retail focus. One of the most successful and diversified financial services players in Hungary, and in Central and Eastern Europe, OTP Group has 50 businesses, ranging from retail banking to savings and pensions.

The firm is always looking to acquire appropriate local businesses to further its customer-facing financial solutions. However, its rapid growth led to some challenges in 2018. According to Márton Horváth, head of the department in OTP Group’s Controlling Service Center, “We had some silos of data because of these acquisitions that we wanted to better unify. OTP Group managers wanted to foster greater internal collaboration and find a solution to give our analysts deeper insight into our subsidiaries’ activities.”

After a thorough market evaluation, Horváth and his team of analysts decided the ease of use and flexibility of Microsoft technology would achieve these goals. “We did not have a group-level data mart in place, though we had several local ones,” he says. “We chose the Microsoft solution because of its compelling pricing, user-friendly interfaces such as Excel, and its easy-to-learn development tools. With such a flexible interface backed by some really powerful data capability, SQL Server promised to deliver results for us quickly.”

Powerful tool

Since 2016, OTP Group has worked in SharePoint to build a brand-new data utility accessible to 2,500 colleagues across the group. “Users can now build an Excel front end very quickly to a powerful back end that offers a real business intelligence universe of functionality, so they can create exactly the OLAP cube they need to analyze their data,” notes Horváth.

In everyday terms, that translates into multiple business benefits, from over 50 percent savings on licensing and support costs to consolidated reporting and better decision making.

Extending the system

According to Horváth, the next step is to offer even more “self-service” data analysis power, with possible use of Power BI and Office 365 entering the mix.

“Use of SQL and SharePoint Server in OTP Group has increased access to important data, leading to better internal coordination inside the group and value-generating insights,” adds Horváth. “It is also radically reducing the time to produce key reports from two weeks to two days.”

Use of SQL and SharePoint Server has increased access to important data, leading to better internal coordination inside the group which is resulting in value-generating insights.

Márton Horváth: Head of Department, Controlling Service Center

OTP Group


Case Study: Canvass Analytics transforms large-scale operations using industrial AI supported by Azure

Canvass Analytics transforms large-scale operations using industrial AI supported by Azure

March 29, 2019

Canvass Analytics is a leader in AI-powered predictive analytics. It supports large-scale industrial operations by automating complex production processes. The team knows first-hand that AI is an innovation enabler that helps machines, devices, and people to work smarter. By partnering with Microsoft and building on Azure cloud services, Canvass has scaled its advanced analytics platform, helping customers drive new efficiencies and maximize productivity.

Artificial intelligence and training machine learning algorithms take a lot of compute power. So we needed a cloud provider that could provide us with those "burstable" types of services.

Steve Kludt: Chief Data Officer

Canvass Analytics


Case Study: Healthcare technology firm supports smart, scalable, cost-effective medical education with Azure

Healthcare technology firm supports smart, scalable, cost-effective medical education with Azure

March 29, 2019

Education Management Solutions (EMS) is a healthcare education technology pioneer that enables medical schools and students to simulate surgery and other clinical conditions with incredible accuracy. Students find these simulations invaluable, but setting up an on-premises simulation environment is both costly and complex. EMS knew that the answer lay in the cloud, so it decided to support its SIMULATIONiQ platform with Microsoft Azure. Now EMS offers a flexible and accessible simulation service, where institutions use live video and other online resources to provide more students with deep, consistent, highly valuable training experiences.

We use Azure to make everything easier for us and our customers. It’s easier to deploy and scale our platform and easier than ever for institutions to access powerful healthcare education resources.

Alok Saxena: Chief Technology Officer

Education Management Solutions

Medical education that comes to life

A young surgical student leans over an anesthetized patient on an operating table, about to conduct a relatively simple procedure. But as the student makes her first incision, she is all too aware that a literal scalpel’s edge could be the difference between a routine operation and a life-threatening situation.

For our aspiring surgeon, the tension is real, but we can relax. Her patient is not a person; it’s an extremely sophisticated high-fidelity mannequin. The student is training in a medical simulation center, under the watchful eyes of high-resolution video recorders, streaming live so students and doctors in other places can monitor the procedure in real time. If she does make a mistake, the mannequin will react exactly like a live patient, from vital signs to tissue responses. Even if the worst happens, the student and her peers—and ultimately patients—may still benefit from analyses of the wealth of data collected during the simulation.

Though invaluable for medical education, these simulations require significant digital capacity. It’s difficult, costly, and time-consuming to build and operate a simulation center. Many institutions serve multiple campuses, so they must find ways to ensure that students and instructors can access the appropriate simulation resources.

Terabytes of data and high-resolution video—in the cloud

Education Management Solutions (EMS) is a medical education technology provider that delivers its SIMULATIONiQ clinical simulation management platform to some of the biggest medical schools and teaching hospitals in the United States. Medical educators and certification boards use SIMULATIONiQ to bring their simulation hardware, programs, people, and processes into a single, simplified view that creates efficiencies and improves clinical outcomes.

EMS wanted to help its customers store and process terabytes of data from the dozens of sensors in the high-fidelity mannequins, an array of other medical devices, and high-resolution video recorders streaming to multiple locations. So, the company decided to support SIMULATIONiQ in the cloud with Microsoft Azure.

A solution that’s easier to deploy, scale, and access

For EMS—a longstanding member of the Microsoft Partner Network—past successes with Microsoft technology and easy scalability of service made Azure the company’s clear cloud choice. “When SIMULATIONiQ customers stream video data, it can cause big ebbs and flows in bandwidth demand,” says Lynn Welch, Vice President of Business Development and Marketing at Education Management Solutions. “We like how we can use Azure to scale and expand the solution as our customers need it.”

Institutions that run SIMULATIONiQ help safeguard simulation data and other potentially sensitive healthcare information by using Azure App Service to provide automated antivirus and other security updates. Institutions, clinicians, and students can access simulation data from anywhere, so their education experiences don’t stop when they leave the simulation center.

“We use Azure to make everything easier for us and our customers,” says Alok Saxena, Chief Technology Officer at Education Management Solutions. “It’s easier to deploy and scale our platform and easier than ever for institutions to access powerful healthcare education resources.”

Connected medical education

Today, when a team conducts a simulation, students on multiple campuses can follow along live through Azure Media Services. Clinicians and medical analysts from across the institution can process the simulation data to help students better understand their performance. And the students themselves can access their own data—and libraries of simulation recordings—wherever and however they want to.

That creates less need to duplicate resources or install more hardware than necessary. And with simulation resources accessible to everyone, nobody gets left behind. Institutions that use SIMULATIONiQ can use Azure resources like Azure Data Lake to keep active simulations ready for playback while archiving other data in separate, low-cost data stores.

Better outcomes for everybody

Like healthcare in general, medical educators must always keep costs top of mind, and by supporting SIMULATIONiQ with Azure, EMS can help its customers do more—and drastically simplify their infrastructure footprint. With the flexibility in Azure, EMS is prepared to extend its platform to include other advanced technologies such as machine learning and augmented reality.

“We use Azure to help our customers expand their educational offerings in an environment where many institutions are consolidating,” says Welch.

More than a simple cloud migration, moving SIMULATIONiQ to Azure heralds a significant transformation at EMS—and better outcomes for institutions, medical students, and their future patients.

Find out more about Education Management Solutions and the SIMULATIONiQ platform on Twitter, Facebook, and LinkedIn.

We use Azure to help our customers expand their educational offerings in an environment where many institutions are consolidating.

Lynn Welch: Vice President, Business Development and Marketing

Education Management Solutions


Case Study: The £5 Million Idea – how an idea on Yammer helped save £5 million via Sideways6

The £5 Million Idea – how an idea on Yammer helped save £5 million via Sideways6

December 10, 2018

Summary

Centrica is an energy and services company with over 35,000 employees. Its principal activity is the supply of electricity and gas to 8 million businesses and consumers in the United Kingdom, Ireland and North America. With the unprecedented pace of change in the energy utility sector, the need to focus on operational efficiencies and continual business improvements has increased dramatically.

Challenge

At Centrica – one of the largest multinational energy suppliers – the innovation team are challenged with finding new ways to improve processes and customer satisfaction, reduce costs and increase revenue. Centrica sought help from Sideways 6 to adopt a solution-based approach to its employee ideas programme.

Strategy

With an active Yammer network already in use, Sideways 6 were able to offer the innovation team an accessible, intuitive and social solution for their employee idea programme complete with tools for capturing, managing, filtering, reviewing and analysing ideas and communicating back to idea submitters.

To kick off the campaign, a Yammer group called ‘Make A Good Idea Count’ (MAGIC) was set up, where employees were encouraged to post and discuss any ideas they had.

Given Yammer’s open and social nature, the team were able to capture more and better-quality ideas from employees through this approach.

Results

Three call centre employees shared an identical idea independently of each other in the Yammer group. Centrica had texted customers to advise them when the company had tried to phone them, but didn’t give the customer the option of texting back. Why not give the customer this option?

The innovation team took forward the idea and trialled it in the call centre offices in Cardiff and Mumbai.

This simple change produced an estimated business value of £5 million in cost savings through higher productivity and increased customer satisfaction.

The idea was one of hundreds that have been successfully implemented through the MAGIC campaign, demonstrating just what’s possible when employees are given a voice.

Prior to the introduction of Sideways 6, our business had not taken advantage of Yammer’s functionality and infrastructure to successfully crowdsource ideas.

– Jenny Jarvis

Employee Insight Analyst
Centrica


Case Study: Sogeti and Motion10 build SharePoint solution using Flow Buttons for ProRail

Sogeti and Motion10 build SharePoint solution using Flow Buttons for ProRail

January 24, 2019

Summary

ProRail is a semi-governmental organization that is responsible for the entire railway infrastructure in the Netherlands. They work 24/7 to make the rail infrastructure safer, more reliable, and more durable to get people and goods to their destination on time. The Netherlands has the busiest rail system in Europe; in 2015, 3.3 million train rides were made. ProRail employs approximately 4,000 people and has an Office 365 E3 enterprise license for every employee.

To support such a scale, ProRail works closely with two preferred solution providers on their various needs, Sogeti and Motion10.

To more easily facilitate provisioning of team and project sites, ProRail looked to Sogeti and Motion10 for a more streamlined process.

Challenge

ProRail uses Office 365 and SharePoint Online to collaborate in teams, departments, and project teams. To support and maintain SharePoint sites, they use several different site templates to provision a site when a new project or team is formed.

All this work is done by the Enterprise Content Management (ECM) team. To facilitate easy and rapid provisioning of team and project sites, they asked for a more streamlined process.

Strategy

ProRail defined a button flow for SharePoint administrators for rapid site creation. ProRail currently uses Mavention Make for site creation, and the button flow uses this solution to create sites. For your own environment, you can also use Office Dev PnP as a provisioning solution.

The administrator can trigger a button flow from anywhere using the Flow mobile app, or from the Flow website.

When selecting the button, they can easily provide input parameters such as the site title, the template to use, and a unique tracking number for the site.

Once every input is filled in, a compose action creates a JSON message from the inputs. When the compose action completes, the flow sends the JSON message to an Azure function that provisions the site. These inputs, combined with additional configuration predefined in the flow, are all that’s needed to easily provision new sites.
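The article doesn't publish the actual message schema the compose action produces; a minimal Python sketch of the step, with hypothetical field names, shows how the button inputs become a JSON payload for the provisioning function:

```python
import json

def compose_site_request(site_title: str, template: str, tracking_number: str) -> str:
    """Assemble the button-flow inputs into a JSON message, as the Flow
    compose action does before posting it to the Azure function.
    Field names are hypothetical; ProRail's schema is not published."""
    payload = {
        "siteTitle": site_title,
        "template": template,
        "trackingNumber": tracking_number,
    }
    return json.dumps(payload)

# Example: the inputs an administrator might supply when pressing the button.
message = compose_site_request("Project Alpha", "TeamSite", "PR-0042")
print(message)
```

In the real flow this message would be delivered over HTTPS to the Azure function, which combines it with the predefined configuration to create the site.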

As a next step, the team plans to leverage button sharing as well as Approvals to make this capability available to end users. They will add an approval step to the button flow and then share that button with company employees. This allows for self-service site creation, monitored by the ECM team. In addition, they will also make self-service creation available with other configurations for SharePoint.

Results

"Flow buttons help us to create sites quicker which makes our customer happy. Another advantage is that a non-technical person can do the job. I can’t wait to see the Flow app in the hands of the end-users. It will help them adopt SharePoint. Most people like apps, don’t they?"

– Berna Vink

ProRail
