
The number of posts the platform has to handle can vary from 5 to 50 per second. With Auto Scaling in Amazon EC2, we can scale our infrastructure by 10 times without any problem. … As a result, we don’t pay for resources that aren’t used—and neither do our customers.
Piotr Surma, Managing Director, Applica
Applica provides solutions based on artificial intelligence that allow organizations to automate the moderation of user-generated comments on their websites.
Applica’s artificial intelligence (AI) algorithms power solutions that automate business processes. Based in Poland, the company offers a semantic-moderation service that automates how user comments on websites are managed. It analyzes posts and rejects ones found to include racist or sexist phrases, swear words, threats, or any other inflammatory language. Customers of the service include large media organizations that have extensive web presences and large volumes of user posts to moderate. Three out of the top four media portals in Poland use Applica’s AI technology.
Applica’s service processes more than four million posts each month, and it does so with 96 percent accuracy—which is comparable to the accuracy of human moderators. It is a great example of how far AI solutions in the field of text analytics have come in recent years. Piotr Surma, managing director at Applica, sums up the challenge his company faces in delivering this solution: “We serve many large clients and have strict service-level agreements with each one. If we fail to meet these, it reflects badly on our company and our services, and ultimately it can lose us business.” This means the infrastructure underpinning Applica’s AI technology needs to be reliable and available 24×7.
“We experience highly unpredictable workloads. If a story breaks on a news portal run by one of our media group customers, it can generate a lot of traffic,” says Adam Dancewicz, Applica’s chief technology officer. “We must have the capacity to process these posts without a dip in responsiveness. What’s more, we need scalability that’s cost-effective. No business wants to pay for servers that aren’t being used when demand is low, and no business wants to pass this cost on to its customers.”
Applica chose to run the infrastructure supporting all its commercial client services on Amazon Web Services (AWS). “We were always going to launch our service in the cloud,” says Dancewicz. “AWS was the obvious choice to help meet our criteria of flexible capacity within a highly reliable infrastructure.”
Applica uses Amazon Simple Storage Service (Amazon S3) as its main data store and Amazon Glacier for low-cost, durable storage for large volumes of unstructured, noncritical data. Applica’s use of Auto Scaling in Amazon Elastic Compute Cloud (Amazon EC2) is critical to operations. “Auto Scaling is a key tool for us in maintaining availability. It’s extremely important that we can power up new instances when we need them and then scale them down when the load is lower,” he says. The company also takes advantage of Elastic Load Balancing, which is used together with Auto Scaling to automatically distribute traffic between instances.
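The case study does not publish Applica's actual storage configuration, but the S3-to-Glacier pattern it describes is typically implemented with a lifecycle rule. A minimal sketch in Python with boto3, using a hypothetical bucket, prefix, and 30-day cutoff:

```python
import boto3

s3 = boto3.client("s3")

# Move noncritical objects to low-cost archival storage after 30 days.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-moderation-archive",             # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-noncritical-data",
                "Filter": {"Prefix": "raw-posts/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"}
                ],
            }
        ]
    },
)
```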
Using AWS, Applica can easily scale to 10 times its normal load to cope with the increased activity that its customers’ websites experience when news stories break or major sporting events take place. Surma says, “The number of posts the platform has to handle can vary from 5 to 50 per second. With Auto Scaling in Amazon EC2, we can scale our infrastructure by 10 times without any problem. And once the conversations and excitement have died down, everything scales back down automatically. As a result, we don’t pay for resources that aren’t used—and neither do our customers.”
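Applica's exact scaling rules aren't disclosed, but the behavior Surma describes, with capacity following load up and then back down automatically, matches a target-tracking scaling policy. A minimal boto3 sketch; the group name and the 50 percent CPU target are illustrative assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-west-1")

# Keep average CPU across the group near 50%: instances are added as
# post volume rises and removed automatically once it subsides.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="moderation-workers",   # hypothetical group
    PolicyName="track-average-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```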
“We process about four million posts a month using AWS,” says Dancewicz. “If we had to do this manually, we’d be able to process just 30 percent of that amount.” Using AWS, Applica can add new customers without worrying about whether it has the ability to deliver services. Onboarding new customers is fast, too: Surma says it takes only hours on AWS, compared to the days it would take in an on-premises environment.
The high availability of its AWS infrastructure means Applica can easily meet its service-level agreements with clients. “Posts and comments aren’t buffered and published later,” says Surma. “To keep discussions live—and keep the user experience at the highest possible level—our technology must provide a response in milliseconds once a comment is posted. This is a vital requirement and one of the key benchmarks by which our clients judge our service.”
Another benefit is the ability to conduct fast updates without disruption. Dancewicz says, “Changes to the platform take no more than 20 minutes, but the important thing is that the service isn’t affected. With an on-premises infrastructure, this speed and ease of updating would be unthinkable.”
“By using AWS, we can deliver a cutting-edge AI service to customers,” concludes Surma. “They’re impressed with the accuracy and speed of the solution and the fact that it can help them boost operational efficiency. For example, one of our large media customers was able to cut a team of moderators from 12 to 2, redeploying them to higher-value work that benefits its business more. It’s great to hear this type of feedback from clients.”
By building the Aircel Backup app on AWS, we reduced development time by about 60%.
Dr. Uttam Kumar, Senior General Manager – IT, Aircel
Aircel is an Indian mobile service provider with a strong focus on delivering innovative services to its customers. The company is a pan-India 2G operator with 3G spectrum in 13 Indian states. The company has won numerous awards, including the Voice & Data Special Leadership Recognition in the ‘Customer Service’ category at the ET Telecom Awards 2014 in India. Aircel is headquartered in Gurgaon in the state of Haryana, India.
India is one of the largest and fastest-growing smartphone markets in the world with 300 million smartphone users and more than 27.5 million devices sold in the second quarter of 2016, as reported by IDC. As a network provider competing for the millions of smartphone users in India, Aircel seeks to distinguish itself from other network providers through its portfolio of services and price plans.
As part of its portfolio of services, Aircel provides the Aircel e-money platform, which enables customers to recharge mobile data cards, pay bills, make payments, and transfer funds to any bank account. To better support continued development, Aircel wanted to migrate the Aircel e-money platform from a hosted environment to the cloud. The company also wanted to launch Aircel Backup, an Android-based mobile app which would provide customers with 2 gigabytes of free storage for files, messages, audio, and video for their mobile devices.
For both products, Aircel needed a reliable cloud infrastructure, scalable enough to handle traffic peaks. Crucially for Aircel, the cloud’s security would need to support regulatory guidelines for electronic wallets from the Reserve Bank of India (RBI), India’s central banking authority. Dr. Uttam Kumar, senior general manager – IT, says, “Besides RBI compliance, scalability, and reliability, the backend IT for our apps had to be cost-effective. We wanted to avoid any management overheads. Our aim was to work with an infrastructure that enabled us to focus on product development rather than day-to-day administration and operations.”
Aircel looked to engage with a cloud-service provider that could deliver the requirements for the backend infrastructures of the Aircel e-money platform and Aircel Backup app. Dr. Kumar determined that Amazon Web Services (AWS) could help Aircel meet the RBI security guidelines for e-money services. “I saw that the AWS Cloud had proven its reliability and security with other providers of electronic wallets. Furthermore, AWS offered very competitively priced storage, and this made our Aircel Backup idea viable.”
Aircel began work with the AWS team along with To The New, an AWS Partner Network (APN) Advanced Consulting Partner, to design and build the backend infrastructure for Aircel Money. For its storage app, Aircel worked directly with the AWS team. To The New would also manage the AWS infrastructure for the Aircel e-money platform on behalf of Aircel. Says Dr. Kumar, “We followed a recommended architecture for our AWS infrastructure, which meant there were no technical issues with either solution. It was a very smooth process when the apps went live.”
Both the Aircel e-money service and Aircel Backup app run on Amazon Elastic Compute Cloud (Amazon EC2) instances. This includes the apps’ web servers and databases. Storage for the Aircel Backup app is provided via Amazon Simple Storage Service (Amazon S3). The company uses Amazon CloudWatch to monitor the performance of both apps and to raise an alarm if performance falls below set thresholds. It also uses AWS CloudTrail to record application programming interface (API) calls and to deliver log files.
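Aircel's monitoring thresholds aren't given, but an alarm of the kind described can be defined with a single CloudWatch API call. A boto3 sketch; the alarm name, instance ID, SNS topic, and 80 percent threshold are all hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Alarm if average CPU on an app server stays above 80% for two
# consecutive 5-minute periods, then notify the operations team.
cloudwatch.put_metric_alarm(
    AlarmName="emoney-web-high-cpu",                      # hypothetical
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```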
Aircel successfully migrated the Aircel e-money service to AWS and launched Aircel Backup. Says Dr. Kumar, “By building the Aircel Backup app on AWS, we reduced development time by about 60 percent.” The AWS infrastructure behind both apps delivers 99.999 percent availability—crucial to Aircel customers. Dr. Kumar says, “The availability of our AWS infrastructure is vital because we would be liable for financial penalties from RBI if there were downtime. It’s also important to remember that, above all else, customers are looking for reliability from our apps. The stability of the AWS infrastructure enables us to deliver a high level of customer satisfaction.”
“We can easily auto-scale our AWS infrastructure to meet traffic peaks on the Aircel e-money platform and then scale the infrastructure down during quieter times,” says Kumar. In addition, because Aircel pays only for the AWS resources it consumes, the AWS infrastructure is highly cost-effective. “Compared with an on-premises solution, AWS has significantly reduced our costs,” Dr. Kumar says. Taking advantage of the AWS Cloud also gives his team of developers the flexibility to work more efficiently than they could if they were using an on-premises infrastructure. Says Dr. Kumar, “We have increased our speed of software deployment by more than 50 percent, helping us drive innovation.”
Coinbase, a growing bitcoin wallet and exchange service headquartered in San Francisco, is the largest consumer bitcoin wallet in the world and the first regulated bitcoin exchange in the United States. Bitcoin is a form of digital currency that is created and stored electronically. The company, which supports 3 million global users, facilitates bitcoin transactions in 190 countries and exchanges between bitcoin and fiat currencies in 26 countries. In addition to its wallet and exchange services, Coinbase offers an API that developers and merchants can use to build applications and accept bitcoin payments.
Since its founding in 2012, Coinbase has quickly become the leader in bitcoin transactions. As it prepared to respond to ever-increasing customer demand for bitcoin transactions, the company knew it needed to invest in the right underlying technology. “We’re now in the phase of legitimizing this currency and bringing it to the masses,” says Rob Witoff, director at Coinbase. “As part of that, our core tenets are security, scalability, and availability.”
Security is the most important of those tenets, according to Witoff. “We control hundreds of millions of dollars of bitcoin for our customers, placing us among the largest reserves in our industry,” says Witoff. “Just as a traditional bank would heavily guard its customers’ assets inside a physical bank vault, we take the same or greater precautions with our servers.”
Scalability is also critical because Coinbase needs to be able to elastically scale its services globally without consuming precious engineering resources. “As a startup, we’re meticulous about where we invest our time,” says Witoff. “We want to focus on how our customers interact with our product and the services we’re offering. We don’t want to reinvent solutions to already-solved foundational infrastructure.” Coinbase also strives to give its developers more time to focus on innovation. “We have creative, envelope-pushing engineers who are driving our startup with innovative new services that balance a delightful experience with uncompromising security,” says Witoff. “That’s why we need to have our exchange on something we know will work.”
Additionally, Coinbase sought a better data analytics solution. “We generate massive amounts of data from the top to the bottom of our infrastructure that would traditionally be stored in a remote and dated warehouse. But we’ve increasingly focused on adopting new technologies without losing a reliable, trusted core,” says Witoff. “At the same time, we wanted the best possible real-time insight into how our services are running.”
To support its goals, Coinbase decided to deploy its new bitcoin exchange in the cloud. “When I joined Coinbase in 2014, the company was bootstrapped by quite a few third-party hosting providers,” says Witoff. “But because we’re managing actual value and real assets on our machines, we needed to have complete control over our environment.”
Coinbase evaluated different cloud technology vendors in late 2014, but it was most confident in Amazon Web Services (AWS). In his previous role at NASA’s Jet Propulsion Laboratory, Witoff gained experience running secure and sensitive workloads on AWS. Based on this, Witoff says he “came to trust a properly designed AWS cloud.”
The company began designing the new Coinbase Exchange by using AWS Identity and Access Management (IAM), which securely controls access to AWS services. “Cloud computing provides an API for everything, including accidentally destroying the company,” says Witoff. “We think security and identity and access management done correctly can empower our engineers to focus on products within clear and trusted walls, and that’s why we implemented an auditable self-service security foundation with AWS IAM.” The exchange runs inside the Coinbase production environment on AWS, powered by a custom-built transactional data engine alongside Amazon Relational Database Service (Amazon RDS) instances and PostgreSQL databases. Amazon Elastic Compute Cloud (Amazon EC2) instances also power the exchange.
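Coinbase's actual policies are not public, but the "clear and trusted walls" idea maps to narrowly scoped IAM policies. A minimal boto3 sketch that grants read-only access to a single hypothetical S3 bucket and nothing else:

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy: engineers can read one log bucket, nothing more.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-exchange-logs",      # hypothetical
                "arn:aws:s3:::example-exchange-logs/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="read-exchange-logs",
    PolicyDocument=json.dumps(policy_document),
)
```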
The organization provides reliable delivery of its wallet and exchange to global customers by distributing its applications natively across multiple AWS Availability Zones.
Coinbase created a streaming data insight pipeline in AWS, with real-time exchange analytics processed by Amazon Kinesis, a managed big-data processing service. “All of our operations analytics are piped into Kinesis in real time and then sent to our analytics engine so engineers can search, query, and find trends from the data,” Witoff says. “We also take that data from Kinesis into a separate disaster recovery environment.” Coinbase also integrates the insight pipeline with AWS CloudTrail log files, which are sent to Amazon Simple Storage Service (Amazon S3) buckets, then to the AWS Lambda compute service, and on to Kinesis containers based on Docker images. This gives Coinbase complete, transparent, and indexed audit logs across its entire IT environment.
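The first hop of such a pipeline, an application emitting an operational event into a stream that multiple consumers then read independently, can be sketched in a few lines of boto3. The stream name and event fields here are hypothetical, not Coinbase's schema:

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# An operational event of the kind the pipeline alerts on.
event = {
    "source": "exchange-api",
    "action": "security_group_modified",
    "detail": {"group_id": "sg-0abc1234"},
}

kinesis.put_record(
    StreamName="ops-analytics",                  # hypothetical stream
    Data=json.dumps(event).encode("utf-8"),
    # Events from the same service land on the same shard, in order.
    PartitionKey=event["source"],
)
```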
Every day, 1 TB of data—about 1 billion events—flows through that path. “Whenever our security groups or network access controls are modified, we see alerts in real time, so we get full insight into everything happening across the exchange,” says Witoff. For additional big-data insight, Coinbase uses Amazon Elastic MapReduce (Amazon EMR), a web service that uses the Hadoop open-source framework to process data, and Amazon Redshift, a managed petabyte-scale data warehouse. “We use Amazon EMR to crunch our growing databases into structured, actionable Redshift data that tells us how our company is performing and where to steer our ship next,” says Witoff.
All of the company’s networks are designed, built, and maintained through AWS CloudFormation templates. “This gives us the luxury of version-controlling our network, and it allows for seamless, exact network duplication for on-demand development and staging environments,” says Witoff. Coinbase also uses Amazon Virtual Private Cloud (Amazon VPC) endpoints to optimize throughput to Amazon S3, and Amazon WorkSpaces to provision cloud-based desktops for global workers. “As we scale our services around the world, we also scale our team. We rely on Amazon WorkSpaces for on-demand access by our contractors to appropriate slices of our network,” Witoff says.
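That duplication trick, stamping the same version-controlled template out per environment, might look like the following boto3 sketch; the template URL, stack name, and parameter are assumptions for illustration:

```python
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Instantiate an exact copy of the production network for staging,
# from the same version-controlled template.
cfn.create_stack(
    StackName="exchange-network-staging",        # hypothetical stack
    TemplateURL="https://example-templates.s3.amazonaws.com/network.yaml",
    Parameters=[
        {"ParameterKey": "Environment", "ParameterValue": "staging"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
```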
Coinbase launched the U.S. Coinbase Exchange on AWS in February 2015, and recently expanded to serve European users.
Coinbase is able to securely store its customers’ funds using AWS. “I consider Amazon’s cloud to be our own private cloud, and when we deploy something there, I trust that my staff and administrators are the only people who have access to those assets,” says Witoff. “Also, securely storing bitcoin remains a major focus area for us that has helped us gain the trust of consumers across the world. Rather than spending our resources replicating and securing a new data center with solved challenges, AWS has allowed us to hone in on one of our core competencies: securely storing private keys.”
Coinbase has also relied on AWS to quickly grow its customer base. “In three years, our bitcoin wallet base has grown from zero to more than 3 million. We’ve been able to drive that growth by providing a fast, global wallet service, which would not be possible without AWS,” says Witoff.
Additionally, the company has better visibility into its business with its insight pipeline. “Using Kinesis for our insight pipeline, we can provide analytical insights to our engineering team without forcing them to jump through complex hoops to traverse our information,” says Witoff. “They can use the pipeline to easily view all the metadata about how the Coinbase Exchange is performing.” And because Kinesis provides a one-to-many analytics delivery method, Coinbase can collect metrics in its primary database as well as through new, experimental data stores. “As a result, we can keep up to speed with the latest, greatest, most exciting tools in the data science and data analytics space without having to take undue risk on unproven technologies,” says Witoff.
As a startup company that built its bitcoin exchange in the cloud from day one, Coinbase has more agility than it would have had if it created the exchange internally. “By starting with the cloud at our core, we’ve been able to move fast where others dread,” says Witoff. “Evolving our network topology, scaling across the globe, and deploying new services are never more than a few actions away. This empowers us to spend more time thinking about what we want to do instead of what we’re able to do.” That agility is helping Coinbase meet the demands of fast business growth. “Our exchange is in hyper-growth mode, and we’re in the process of scaling it all across the world,” says Witoff. “For each new country we bring on board, we are able to scale geographically and at the touch of a button launch more machines to support more users.”
By using AWS, Coinbase can concentrate even more on innovation. “We trust AWS to manage the lowest layers of our stack, which helps me sleep at night,” says Witoff. “And as we go higher up into that stack—for example, with our insight pipeline—we are able to reach new heights as a business, so we can focus on innovating for the future of finance.”
When people hear that Amazon is on the verge of concluding an enterprise-level, multiyear initiative to move the company’s data from Oracle databases onto Amazon Web Services (AWS), this question might come to mind: Why wasn’t the online retail powerhouse, known for its use of leading-edge technologies, already taking advantage of the variety, scale, reliability, and cost-effectiveness of AWS—especially considering that the two are part of the same company?
The first part of the answer is that Amazon was born long before AWS, in an era when monolithic, on-premises database solutions like Oracle still made the most sense for storing and managing data at enterprise scale. And, the second is that—even though that era is now over—there are big obstacles to disengaging from Oracle, as many enterprises that want to shift to AWS know all too well.
“If a company like Amazon can move so many databases used by so many decentralized, globally distributed teams from Oracle to AWS, it’s really within the reach of almost any enterprise."
Thomas Park, Senior Manager of Solution Architecture for Consumer Business Data Technologies, Amazon
In Amazon’s case, obstacles to leaving Oracle included the size of the company’s fleet—more than 5,000 databases connected to a variety of non-standardized systems, with ownerships and dependencies that were not centrally inventoried. There were personnel-related risks as well. The careers of many Amazon employees were based on Oracle database platforms. Would they fully support the move? Would some just leave?
Similar challenges face the many other companies that want to switch from Oracle to AWS. Just like those other companies, Amazon had urgent reasons to make it work. Amazon engineers were wasting too much time on complicated and error-prone database administration, provisioning, and capacity planning. The company’s steep growth trajectory—and sharply rising throughput—required more and more Oracle database shards, with all the added operations and maintenance overhead those bring. And then there were the costs: business as usual on Oracle would increase the millions of dollars Amazon was already paying for its license: a jaw-dropping 10 percent a year.
"It was the same situation for us as it is for so many enterprises," says Thomas Park, senior manager of solutions architecture for Consumer Business Data Technologies at Amazon.com, who helped lead the migration project. "Oracle was both our biggest reason for, and most significant obstacle against, shifting onto AWS."
That was then. Today, Amazon stands on the verge of completing the migration of about 50 petabytes of data and shutting down the last of those 5,000 Oracle databases. How did the company pull off this massive migration?
Amazon faced two key challenges during the migration. One was how to tackle the large-scale program management necessary to motivate its diverse, globally distributed teams to embrace the project and track its progress. The other was the technical complexity of the migration. For the project to be successful, it was clear that the company’s business lines would need centralized coordination, education, and technical support.
To overcome these challenges, Amazon began by creating an enterprise Program Management Office (PMO), which set clear performance requirements and established weekly, monthly, and quarterly reviews with each service team to track and report progress and program status.
"In establishing the program we had to clearly define what we were trying to achieve and why, before we addressed the ‘how,’” says Dave George, Amazon’s director of Consumer Business Data Technologies. “Once we established the ‘what’ and the ‘why,’ we established clear goals with active executive sponsorship. This sponsorship ensured that our many distributed tech teams had a clear, unambiguous focus and were committed to deliver these goals. Relentless focus on delivery ensured that disruption to other business priorities was minimized while achieving a significant architectural refresh of core systems.”
Also key to the project’s success was an AWS technical core team of experienced solutions architects and database engineers. This team made recommendations as to which AWS services would be best suited for each category of Amazon data being migrated from Oracle.
This team also provided formal instruction about specific AWS services, ran hands-on labs, offered one-on-one consultations and coordinated direct assistance by AWS product teams for Amazon businesses experiencing specific challenges.
"Having this central team staffed with experienced solutions architects and database engineers was crucial to the project’s success," says Park. “The team not only helped educate Amazon business teams but provided feedback and feature requests that made AWS services even stronger for all customers."
Amazon also thought carefully about how best to help its Oracle database administrators transition onto the new career paths now open to them. One option was to help them gain the skills necessary to become AWS solutions architects. Another was a managerial role in which an Oracle background would be helpful during the ongoing process of bridging traditional Oracle-based environments and AWS Cloud environments.
Migrating to AWS has cut Amazon’s annual database operating costs by more than half, despite having provisioned higher capacity after the move. Database-administration and hardware-management overhead have been greatly reduced, and cost allocation across teams is much simpler than before. Most of the services that were replatformed to Amazon DynamoDB—typically the most critical services, requiring high availability and single-digit millisecond latency at scale—saw a 40-percent reduction in latency, despite now handling twice the volume of transactions. During the migration, service teams also took the opportunity to further stabilize services, eliminate technical debt, and fully document all code and dependencies.
Reflecting on the scope of the project—a migration that affected 800 services, thousands of microservices, tens of thousands of employees, and millions of customers, and that resulted in an AWS database footprint for Amazon larger than for 90 percent of its fellow AWS customers—Amazon.com sees a lesson for other large enterprises contemplating a similar move.
"No one involved with this migration project would say it was simple, easy, or fun, but it didn’t take superpowers, either. If a company like Amazon can move so many decentralized, globally distributed databases from Oracle to AWS, it’s really within the reach of almost any enterprise."
Amazon Aurora was the easiest part of the migration. It never gave us the slightest problem.
Josh Gage, Senior Software Development Engineer, Amazon.com
About Amazon.com
Amazon.com is the world’s leading online retailer and the pioneer of customer reviews, 1-Click shopping, personalized recommendations, Prime, AWS, Kindle, Alexa, and many more products and services.
As Amazon.com grew from a one-person startup in 1994 to one of the leading e-commerce sites in the world today, the company overcame challenge after challenge. Success brings its own challenges, though, and now the company faces one that will only intensify the more successful Amazon becomes.
"As one of the world’s largest online retailers, Amazon is also one of the world’s largest targets for online fraud," says Balachandra Krishnamurthy, a software development manager on the Amazon Transaction Risk Management Services (TRMS) team. "Customers make hundreds of purchases per second on our website and mobile app, and every one of those transactions must be screened for fraud."
To do this, the TRMS team’s Buyer Fraud Service (BFS) system collects more than 2,000 real-time and historical data points for each order and uses machine-learning algorithms to detect and prevent those with a high probability of being fraudulent. BFS prevents millions of dollars in fraudulent transactions every year.
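Amazon's actual models and features are confidential, so the following is only an illustrative sketch of the scoring step: a trained classifier turns an order's feature vector into a fraud probability, and orders above a threshold are held. The toy model, the 20-feature vector, and the 0.9 threshold are all hypothetical:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for a model trained offline on labeled historical orders;
# real feature vectors would carry thousands of signals per order.
rng = np.random.default_rng(0)
model = LogisticRegression().fit(
    rng.random((1000, 20)), rng.integers(0, 2, 1000)
)

def screen_order(features: np.ndarray, threshold: float = 0.9) -> bool:
    """Return True if the order should be held for fraud review."""
    fraud_probability = model.predict_proba(features.reshape(1, -1))[0, 1]
    return fraud_probability >= threshold

print(screen_order(rng.random(20)))
```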
"We put immense resources into this fight, not only to protect Amazon’s bottom line but also to maintain the high trust our customers and sellers place in us," says Krishnamurthy. "Amazon has a reputation as a platform with very high security standards, and we are committed to upholding that reputation every second of every day."
With this commitment in mind, TRMS decided to migrate to Amazon Web Services (AWS) the more than 100 on-premises Oracle databases in which it stored the 40 TB of data its machine-learning models use to identify fraudulent transactions.
Running on Oracle posed many challenges for TRMS, including complicated database administration that required the full-time attention of three engineers. The TRMS team also experienced latency levels under peak loads that were not acceptable for it to operate effectively; these issues required complex, multiyear engineering projects to address. Finally, the team spent 100 hours provisioning hardware in 2017, not including installation and testing—time it hoped to allocate to more strategic work.
Because the Buyer Fraud Service is a critical application and must operate at 99.995 percent availability, TRMS decided to use PostgreSQL-compatible Amazon Aurora as the new platform to host its databases. Amazon Aurora, a cloud database service that also offers MySQL compatibility, combines the performance and availability of Oracle with the simplicity of open-source databases and is three times faster than standard PostgreSQL databases.
As strong as the case was for moving to Amazon Aurora, the team knew that migrating a large-scale system that operates at such high throughput and availability would also pose significant challenges. "The daunting part of this migration was having to move such a large database, with the number of transactions it handles, with minimal downtime," says Josh Gage, a senior software development engineer on the TRMS team. "At 40 TB, we were the largest database migration to AWS in the history of the company."
To minimize the technical complexity of the migration, TRMS decided to re-platform the Buyer Fraud Service and postpone re-architecting it. "We decided we wanted to re-platform so as to accelerate the migration as much as possible while minimizing disruptions," says Krishnamurthy. "We will look at further optimizing the service design and database schemas at a later phase."
To accomplish the project quickly and securely, the team used a migration stack that included AWS Database Migration Service (AWS DMS), which supports migrations to and from leading commercial and open-source databases. During migrations, AWS DMS automatically replicates any changes in the source data to the target database, so the source database can remain operational until the final switchover.
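A task of the kind described, full load plus ongoing change data capture so the Oracle source stays live until cutover, can be sketched with boto3. All ARNs and identifiers below are placeholders, not the TRMS configuration:

```python
import json
import boto3

dms = boto3.client("dms", region_name="us-east-1")

dms.create_replication_task(
    ReplicationTaskIdentifier="bfs-oracle-to-aurora",        # hypothetical
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:oracle-src",
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:aurora-tgt",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:mig-instance",
    # Full load first, then replicate ongoing changes until switchover.
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)
```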
Despite the massive amount of data being moved, the migration project required only six months to complete and one hour of downtime. Gage gives much of the credit for the successful project to the flexibility and ease of use of Amazon Aurora. For example, the ability to create Aurora Read Replicas was a big help during the migration.
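Adding a reader to an Aurora cluster is a single API call, since replicas attach to the cluster's shared storage volume rather than copying data. A boto3 sketch; the identifiers and instance class are hypothetical:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# New reader instances come up already in sync, because Aurora replicas
# share the cluster's storage volume instead of replaying a copy.
rds.create_db_instance(
    DBInstanceIdentifier="bfs-aurora-reader-2",   # hypothetical
    DBClusterIdentifier="bfs-aurora-cluster",     # existing cluster
    Engine="aurora-postgresql",
    DBInstanceClass="db.r5.4xlarge",
)
```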
"As we were migrating, we were able to spin up new instances of our database that were fully synced with the Oracle database in about an hour," says Gage. "That gave us the flexibility to experiment with new approaches and find just the right ones."
Krishnamurthy says he is more than happy with the performance and stability of the new solution. "Things have been running very smoothly on Aurora since the migration," he says. "There have been zero database outages, and we no longer have to worry about the execution plan flips we used to experience on Oracle."
Now that AWS is responsible for most management tasks—such as patching, maintenance, backups, and upgrades—engineers can turn their attention to more valuable work. "We used to need three database engineers to keep Oracle up to date and take care of performance improvement tasks like repartitioning and index tuning," says Gage. "Because Aurora reduced our administrative overhead by about 70 percent, we don’t need those resources just to keep our heads above water and can shift them to more valuable tasks."
The migration also resulted in considerable cost savings. "Another big benefit of the migration is the lower cost of database hosting on AWS," says Krishnamurthy. "On Amazon Aurora, we see performance levels similar to what we saw on Oracle at less than half the cost."
And, using AWS, the team no longer has any concerns about scale. "After the migration, we load-tested the new and the old systems up to 900 transactions per second per shard,” says Gage. "Aurora had no problem handling the load, with minimal CPU usage, while Oracle browned out. We also had no problems in any of our regions on Amazon Prime Day, showing that Aurora can handle our peak traffic with ease."
Gage says the project was much less challenging than some might have expected. "In the end, migrating from Oracle onto AWS turned out to be pretty simple. We hit obstacles, but we were able to overcome them either on our own or with the help of the AWS DMS team, who really provided exemplary support of their product. Amazon Aurora was the easiest part of the migration. It never gave us the slightest problem."
Unfortunately for our new AI overlords, the crusade to take over the world has been stopped in its tracks by an unlikely hurdle: a 16-year-old’s math test.
According to a new paper from Google’s DeepMind, its cutting-edge AI flunked a math exam at the same level as one a 16-year-old in the U.K. would take.
The algorithm was trained on the sorts of algebra, calculus, and other math questions that would appear on a 16-year-old’s exam under the U.K. national curriculum, according to DeepMind research published online on Tuesday.
The researchers tested several types of AI and found that algorithms struggle to translate a question as it appears on a test, full of words and symbols and functions, into the actual operations needed to solve it, according to an article on Medium.
It turns out, according to the research, that even a simple math problem involves a great deal of brainpower, as people learn to automatically make sense of mathematical operations, memorize the order in which to perform them, and turn word problems into equations.
But artificial intelligence is quite literally built to pore over data, scanning for patterns and analyzing them. In that regard, the results of the test — on which the algorithm scored a 14 out of 40 — aren’t reassuring.
For those businesses looking to move all or part of their workload to the cloud, trying to sort through the myriad of options can be an incredibly daunting task. While there are seemingly as many cloud providers as clouds dotting the sky, most enterprises will eventually find themselves staring down the dilemma of Microsoft versus Amazon. Amazon’s AWS and Microsoft’s Azure have a collective stranglehold on the market with a 49% share (32% for AWS and 17% for Azure). All other cloud providers can only hope to pick up a few slivers of the pie these behemoths haven’t already gorged on.
AWS enjoyed first-mover status, flat out dominating the cloud landscape from 2002 to 2009. Microsoft entered the fray in 2010 and is winning the growth battle (76% year-over-year versus 46% for AWS). With that said, it will take a feat of Herculean proportions to dethrone the current 800-lb gorilla in this space. As we all know, size doesn’t necessarily translate into the right solution for your business. Let’s take a look at 6 ways Azure is beating AWS at their own game.
1) Hybrid Cloud
Unless you are a new startup ready to set the world on fire, you have legacy applications to contend with. Throwing everything up into the cloud day one just isn’t a feasible option for most businesses. Due to configuration constraints, some applications may have a hard time leaving the cozy confines of the existing on prem data center. Those are just the realities many IT professionals are faced with, and for companies in this situation a hybrid cloud that balances Azure with on prem can make a lot of sense.
Microsoft has made the hybrid cloud their bread and butter. Azure Stack, Hybrid SQL Server, and other hybrid services highlight their heavy commitment to this model, which has made them the clear leader in hybrid cloud. Amazon certainly isn’t ignoring this space, but this is one of the few places they are lagging behind in the cloud wars.
2) Integration with Microsoft Products
If your enterprise is already running Windows Server, SQL Server, Exchange, Active Directory, and other core Microsoft products to service critical aspects of your business, it’s only logical to plug Azure into that existing ecosystem. While most of these services will integrate just fine with AWS, Microsoft has built a seamless integration experience between its product line and Azure. When you consider all the headaches that come built-in with a server migration, wouldn’t it be nice knowing your core infrastructure will work right out of the gate?
3) Intuitive Use
Want to feel instantly overwhelmed? Log into the AWS portal and start clicking around. Within a few seconds you’ll be asking yourself, “What is an Elastic Beanstalk? Is Amazon going to put magic beans in my code?” Unfortunately, no magic is going to save your code, and there is no Jack or scary giant to be found up in this mythical cloud.
Amazon’s sprawling catalog of service offerings can quickly drown a new user in confusion. It can be difficult to determine what Amazon offers, let alone where to find it. Amazon’s cheeky naming conventions make products like S3, ElastiCache, Redshift, Kinesis, and Glacier seem like completely foreign entities. By comparison, is there any confusion as to what an Azure Virtual Machine is? It takes time to learn the AWS nomenclature, whereas Azure’s is more intuitive, letting you skip part of the learning curve and get to work.
4) PaaS
Platform as a Service provides a framework for developers to code their applications on top of. PaaS factors in the development, testing and deployment of applications in a logical, cost-effective way. This can boost developer productivity and ultimately reduce the department’s time-to-market for its applications.
Microsoft particularly excels in PaaS. To name a few of Azure’s PaaS offerings: Azure Web Apps lets developers instantly launch a highly available web environment that supports a host of languages out of the box (ASP.NET, Node.js, PHP, Java, Ruby, and Python). Azure Mobile Apps provides a backend for your iOS and Android mobile applications to plug into, adding features like offline sync, push notifications, auto scaling, and high availability. Azure Functions are serverless background jobs or microservices triggered by an outside service like Azure Service Bus; each amounts to a small piece of code that requires no plumbing to get up and running, as the sketch below shows.
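To make the "no plumbing" point concrete, here is a minimal Azure Function in Python with a Service Bus trigger, roughly as the platform's Python programming model defines it. The handler name and the queue binding (configured separately in function.json) are illustrative:

```python
import logging

import azure.functions as func


def main(msg: func.ServiceBusMessage) -> None:
    # The platform handles the Service Bus connection, retries, and
    # scaling; the function body contains only business logic.
    body = msg.get_body().decode("utf-8")
    logging.info("Processing queue message: %s", body)
```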
5) Enterprise Agreement
The Enterprise Agreement may be Microsoft’s ace in the hole when it comes to pricing out its competitors. Just like your insurance agent roping you into that home-auto policy combo, bundling Azure services with your existing agreement will get you discounted rates. For the budget-conscious company, this can make the cloud move an easier sell to upper management.
6) Avoid Vendor Lock-in
Sometimes it is not even about Amazon versus Azure. Increasingly, companies are opting for a multi-cloud approach. According to a recent survey by 451 Research, 69% of organizations are planning to go multi-cloud in the coming year. While the idea may seem peculiar on the surface, it has a lot of benefits once you drill down to the details.
With multi-cloud, you can improve reliability by providing failover in the rare event of a vendor outage or DDoS attack. It gives the IT department leverage and options: run processes where they are most cost-effective, and take advantage of critical features that may be spread across separate cloud vendors. You also aren’t at your cloud provider’s mercy if it jacks up prices or suffers performance issues after you’ve finished migrating.
On the whole, you can’t really go wrong with AWS or Azure as your cloud provider. Both are best in class and will continue to innovate and lead the cloud space for years to come. For Microsoft shops or those organizations needing a hybrid cloud option, Azure offers some clear benefits over AWS that should be strongly considered.
Every company is part of a larger innovation ecosystem, every part of which plays a role in ensuring that your product is accepted by the market
Gracy Fernandez
CEO-founder, Graventure
March 29, 2019
Founders in Asia Pacific are always in such a rush to launch their product that many fail to realize going to market too soon can also be a problem. Such products, as the saying goes, are ahead of their time. We need to look no further than the dotcom busts of the early 2000s, many of which re-emerged later on as successful ventures for the social Web, to see this truth in action.
What, then, causes a venture to be too early? The reality is that, as much as founders like to think of themselves as visionaries who can will anything into being, their startup does not exist in a vacuum. Every startup is part of a larger innovation ecosystem, every part of which plays a role in ensuring that your solution is accepted by the market.
The idea that startups can fail if other parts of the innovation ecosystem are not yet ready was outlined by authors Ron Adner and Rahul Kapoor in their seminal 2016 article “Right Tech, Wrong Time” in the Harvard Business Review. For entrepreneurs in the Asia Pacific, this principle is best seen by analysing the different parts of an innovation ecosystem in an emerging tech sub-sector in a particular country.
Let’s thus home in on the agri-tech industry in the Philippines, which has a lot of promise but still needs the advancement of several stakeholders.
Farmers Need to be Upskilled
There are many promising agri-tech startups operating in the Philippines. To take advantage of innovative agri-tech solutions, farmers need to be upskilled. More specifically, they need to be taught digital literacy. As surprising as it may sound, many don’t know how to operate a smartphone, or in many cases, even a mobile phone.
The private and public spheres should take it upon themselves to educate farmers on the use of mobile technologies, as this kind of digital literacy is what will enable them to take advantage of the next generation of agri-tech products. If farmers remain unfamiliar with mobile technology, they will also remain untouched by the advantages of agri-tech.
Agri-tech Needs to Professionalize
Industry associations exist for more than just giving leaders a chance to schmooze. They also play a key role in professionalizing the industry, such as by establishing common standards and launching joint initiatives. In the fin-tech space in the Philippines, for example, the industry association, FinTech Philippines Association, influences national policy on issues relevant to fin-tech, such as user privacy.
In much the same way, agri-tech startups in the Philippines need to band together to steer the future of the industry, particularly as it relates directly to farmers. What guidelines must be established to create a win-win collaboration between agri-tech and farmers on the front lines? Agri-tech leaders must wrestle with tough questions like these, if we wish for farmers to get the most out of their solutions.
The Government Needs to Step in
While agriculture may not be as heavily regulated an industry as, say, fin-tech, the government still has a major role in shaping its future. What good, after all, will the latest agri-tech solutions be if the farmers themselves have outdated basic equipment and infrastructure?
Some Filipino policymakers are pitching in. Former Senator Serge Osmena, for example, is advocating for grants and subsidies to Filipino farmers that will allow them to improve their tools and equipment, and in turn, their yields and income.
“Besides, the new generation does not wish to stay and till the land. The world beckons. There are so many opportunities out there,” said Osmena.
Such modernization of farming methods, infrastructure, and even best practices can only come about through collaboration from all stakeholders. The individual parts of the innovation ecosystem, in short, need to realize they are part of a much larger whole.
The agri-tech industry in the Philippines, in short, has plenty of promise. But, fully realizing this potential requires more than just visionary founders willing to execute, as holds true with all tech sub-sectors across Asia Pacific. The entire innovation ecosystem, including the end users, the industry associations, and the government agencies, must work together to facilitate the future.