Myth # 1: It is only for tech companies. Nothing is further from the truth than this myth; companies in any horizontal or vertical market can use the cloud, regardless of size.


Myth # 2: Security is the biggest risk. The security measures used by well-known cloud vendors are often better than those of their clients; cloud vendors have the resources and the skills to keep them up to date.


Myth # 3: Everything works better in the Cloud. Not quite: older applications that were designed to run on dedicated servers are often difficult to run in the cloud.


Myth # 4: It is always cheaper to run in the Cloud. It is not always cheaper, but it can often be more cost-efficient. The cloud works best for variable workloads, where you have high demand at times and lower demand at others.


Myth # 5: Cloud is harmful to the environment. There’s no question that data centers consume huge amounts of energy. But when businesses move from on-site facilities to consolidated cloud data centers, it saves energy and cuts pollution.


Myth # 6: Cloud costs jobs. Instead of taking jobs, it is in fact creating them; industry predictions suggest that by the end of 2015 cloud computing will have created more than 13 million jobs worldwide. It requires a host of cloud-savvy experts whose skills and knowledge will maintain and strengthen growth and development.


Myth # 7: Migrating into the Cloud is more hassle than it is worth. If you work in partnership with a trusted and experienced hosting provider, it is a seamless process. It can all happen very quickly with minimal downtime.


Myth # 8: Cloud Is Not for Mission-Critical Use. Cloud computing can be used for all aspects of a business, including mission-critical applications, for many reasons, including less downtime and automatic backups.


Myth # 9: Cloud is virtualization. Virtualization is software that manipulates hardware, while cloud computing refers to a service that results from that manipulation.


Myth # 10: I’ll be caught by Vendor ‘lock in’. This is true only to the same extent as with traditional, on-premises software. There is nothing to stop businesses from building their own applications and dealing with more than one vendor.


Ignite In-Memory Data Fabric stores data in memory, as opposed to traditional database management systems, which use disk as their primary storage mechanism. Image credit: Apache Ignite.

In-memory computing (IMC) is replacing hard drives as the primary storage system of choice for many enterprises where speedy processing counts. The trend is driven by the narrowing price gap between RAM (the solid-state system memory “sticks” that plug directly into slots on the motherboard) and solid-state or rotating-media hard drives, which “talk” with the CPU over slower interconnects. IMC “treats RAM as primary storage,” keeping gigabytes of data in RAM, often for months or years (usually saving copies to SSD or hard drives, of course).

One of the leading software tools for enabling IMC has been In-Memory Data Fabric from GridGain Systems, Inc. Nikita Ivanov (now CTO) and Dmitriy Setrakyan, EVP of Engineering, began developing the software in 2005, co-founding GridGain to sell an enterprise edition of the software that included support and additional features.

In 2014, GridGain adopted the Apache 2.0 license for its open source version and made the code available through sites including GitHub, leading to a 20x increase in downloads. In late 2014, the company announced that the Apache Software Foundation had accepted the project into the Apache Incubator program under the name Apache Ignite, and in early 2015, GridGain announced the migration of the open source code to the ASF site.

GridGain currently plans to do a more complete roll-out, including talks and training sessions, at the upcoming ApacheCon North America, April 13-16, in Austin, Texas. Meanwhile, here’s a quick look at Apache Ignite and GridGain, at in-memory computing and what it means for developers and users, and at how you can try Apache Ignite now and get involved in its development.

March of the podlings

Moving the code under the ASF is a natural next step for the software, according to GridGain’s Ivanov.

“We wanted to bring (the open source core of) our code entirely within the Apache umbrella,” says Ivanov. “We feel this will help drive even more adoption of an open-source In-Memory Data Fabric by other Apache developers and users, including those people who have been looking to get real-time performance out of Hadoop.”

’Incubating’ means that Ignite is not yet fully endorsed by the ASF, but is, like penguins marching to their breeding grounds, at a stage along the ASF’s well-defined incubation process from “establishment” to “project.”

Apache Ignite mentor Konstantin Boudnik said, “The project is off to a great start. The community is working on Apache Ignite v1.0 and is aiming to include new automation and ease-of-use features to simplify deployment. Other features in the works include support for JCache (JSR-107), which is a new standard for Java in-memory object caching; Auto-Loading of SQL data; and dynamic cache creation on the fly.”
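For readers unfamiliar with JCache: JSR-107 defines a standard Java API for in-memory object caching. The snippet below is a minimal sketch of that standard API rather than anything Ignite-specific; it assumes a JSR-107 provider is on the classpath, and the cache name "quotes" and the sample entry are invented for the example.

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;

    public class JCacheSketch {
        public static void main(String[] args) {
            // Pick up whichever JSR-107 provider is on the classpath.
            CacheManager manager = Caching.getCachingProvider().getCacheManager();

            // Describe a simple String-to-String cache.
            MutableConfiguration<String, String> cfg =
                new MutableConfiguration<String, String>().setTypes(String.class, String.class);

            // "quotes" is an invented cache name for this example.
            Cache<String, String> cache = manager.createCache("quotes", cfg);

            cache.put("greeting", "cached in memory via the JSR-107 API");
            System.out.println(cache.get("greeting"));
        }
    }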

GridGain, like many companies built around open source, will continue to offer commercial versions of the software that add features plus support.

“We are already in the process of building the next version of our commercial enterprise edition, Version 7.0, on top of Apache Ignite,” says Ivanov.

In-Memory Computing Isn’t New, But Lower Prices and Bigger Capacities Are

The motivation for doing In-Memory Computing is simple: It takes much less time for the processors to read from and write to RAM socketed on the motherboard versus hard or solid-state drives, where data has to traverse the data bus (and for rotating mechanical media, suffer access delays).

“Traditional computing uses disks as primary storage and memory as a primary cache for frequently-accessed data,” says Ivanov. “In-memory computing turns that upside down. Keeping data in RAM helps reduce latencies and increase application performance. This lets companies meet needs for ‘Fast Data’ — computation and transactions on large data sets in real time, in a way that traditional disk and flash storage can’t come close to matching.”
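As a small, hedged illustration of treating RAM as primary storage, the sketch below uses Apache Ignite’s public Java API to start a node with default configuration and keep key-value data in an in-memory cache. The cache name "fastData" and the sample data are invented for the example, and the Ignite libraries are assumed to be on the classpath.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;

    public class RamAsPrimaryStorage {
        public static void main(String[] args) {
            // Start a local Ignite node with default configuration.
            try (Ignite ignite = Ignition.start()) {
                // "fastData" is an invented cache name; entries live in RAM
                // on the cluster nodes rather than on disk.
                IgniteCache<Integer, String> cache = ignite.getOrCreateCache("fastData");

                cache.put(1, "read and written at memory speed");
                System.out.println(cache.get(1));
            }
        }
    }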

The notion of keeping primary data in RAM isn’t new. “RAM disk” software was available for early personal computers like the Commodore 64 and Apple II, and for DOS, and it’s available today for Linux (e.g., shm and RapidDisk), and for Windows 8.

What is relatively new, however, is the increasing affordability of RAM — a quick check at NewEgg.com shows 32 gigabyte sticks of server-grade RAM at $10 to $20 per GB — and of cluster and multi-processor architectures that can scale up and out to accommodate terabytes (TB) of RAM.

“You can buy a 10-blade server that has a terabyte of RAM for less than $25,000,” according to Ivanov. And, says Ivanov, while that much RAM does push up the initial price, “Because of RAM’s lower power and cooling costs, and no moving parts to break, analysts say that the TCO (Total Cost of Ownership) for using RAM instead of rotating or solid-state storage as primary storage breaks even in about three years. And that’s just looking at TCO, not including the delivered value from getting much faster processing performance.”

GridGain’s In-Memory Data Fabric is one of several different software solutions to allow “RAM as primary storage” architectures. “Our software slides logically and architecturally above your database and beneath your application,” says Ivanov. “The goal is to give applications high performance and high scalability compared to using disk-based storage.”
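To make the “above your database and beneath your application” idea concrete, here is a sketch using Ignite’s read-through/write-through CacheStore hooks, with a plain in-process map standing in for a real database. The class name, cache name, and sample data are made up for illustration; a production setup would plug in a JDBC- or NoSQL-backed store instead.

    import java.util.concurrent.ConcurrentHashMap;
    import javax.cache.Cache;
    import javax.cache.configuration.FactoryBuilder;
    import org.apache.ignite.Ignite;
    import org.apache.ignite.IgniteCache;
    import org.apache.ignite.Ignition;
    import org.apache.ignite.cache.store.CacheStoreAdapter;
    import org.apache.ignite.configuration.CacheConfiguration;

    public class FabricBetweenAppAndDb {

        /** Stand-in "database": an in-process map used here instead of a real JDBC store. */
        public static class DemoStore extends CacheStoreAdapter<Integer, String> {
            private static final ConcurrentHashMap<Integer, String> DB = new ConcurrentHashMap<>();

            @Override public String load(Integer key) {
                // Read-through: called on a cache miss.
                return DB.get(key);
            }

            @Override public void write(Cache.Entry<? extends Integer, ? extends String> entry) {
                // Write-through: called on cache.put(...).
                DB.put(entry.getKey(), entry.getValue());
            }

            @Override public void delete(Object key) {
                DB.remove(key);
            }
        }

        public static void main(String[] args) {
            // "appCache" is an invented name; DemoStore stands in for the database layer.
            CacheConfiguration<Integer, String> cfg = new CacheConfiguration<>("appCache");
            cfg.setCacheStoreFactory(FactoryBuilder.factoryOf(DemoStore.class));
            cfg.setReadThrough(true);
            cfg.setWriteThrough(true);

            try (Ignite ignite = Ignition.start()) {
                IgniteCache<Integer, String> cache = ignite.getOrCreateCache(cfg);
                cache.put(42, "served from RAM, persisted behind the scenes");
                System.out.println(cache.get(42));
            }
        }
    }

The application only ever talks to the cache; reads and writes hit RAM first, while the store keeps the backing system of record in sync.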

Who is using — and will be using — in-memory computing? Anyone looking to crunch data faster. This includes not just financial services and bio-informatics, but also, says Ivanov, “As ‘big data meets fast data,’ we are seeing new use cases like hyperlocal advertising, fraud protection, in-game purchasing for online games, and SaaS-enabling premise-based applications while maintaining SLA and multi-tenancy goals.”

GridGain reports hundreds of deployments of its In-Memory Data Fabric at major companies and organizations including Apple, Avis, Canon, E-Therapeutics, InterContinental Hotels Group, Moody’s KMV, Sony, Stanford University, and TomTom.

And, notes Ivanov, “While enterprises probably look at using in-memory computing on systems starting at one-quarter to half a terabyte of RAM, you could be using it on a system with even just a few dozen gigabytes — it depends on how much data you want to keep in RAM.

“Recent studies like one that Gartner did in 2014 found that over 90 percent of enterprise operational payloads for datasets — the data these organizations need to process every day — are less than 2TB. That’s completely in the realm of what we are seeing today. The upper limit for RAM today is probably in the 10 to 20 TB range… and that’s due more to economic issues than technical constraints.”

Using Apache Ignite, Getting Involved

Want to try Apache Ignite? Initial Apache 2.0 licensed source code can be found here.

“You can run this on your laptop, a commodity cluster, or on a supercomputer,” says Ivanov.
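As a hedged illustration of that “laptop to cluster” claim, the sketch below starts (or joins) a cluster and broadcasts a closure to every node: run it on one machine and it prints once, run it across a ten-node cluster and it prints on each node. It assumes the Ignite libraries are on the classpath and uses default discovery configuration.

    import org.apache.ignite.Ignite;
    import org.apache.ignite.Ignition;

    public class ClusterHello {
        public static void main(String[] args) {
            // Start a node; with default configuration it discovers and joins peers on the same network.
            try (Ignite ignite = Ignition.start()) {
                // Run the same closure on every node currently in the cluster.
                ignite.compute().broadcast(() -> System.out.println("Hello from an Ignite node"));
            }
        }
    }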

Want to become part of the Apache Ignite community? There’s no shortage of things you can do.

Specific initial goals will be determined by public discussion among the Apache Ignite community; likely goals, suggests Ivanov, could include:

  • Migrate the existing Ignite code base to the ASF.
  • Refactor development, testing, build and release processes to work in ASF.
  • Attract developer and user interest in the new Apache Ignite project.
  • Road-map the integration efforts with “sister” projects in the ASF ecosystem, such as Storm and Spark.
  • Incorporate externally developed features into the core Apache Ignite project.

“We believe these initial goals are sufficiently difficult to be considered early milestones,” noted Ivanov in GridGain’s February 2015 announcement.

Over time, says Ivanov, “We expect Apache Ignite will become for Fast Data what Hadoop is for Big Data.”


It’s that time of the year once again when lots of predictions are made for the upcoming year. In the last couple of years, cloud computing has become an integral part of IT strategy across enterprises. Here are eight cloud computing trends that will drive cloud strategies throughout 2015 and shape cloud planning processes too.

1. Enterprise workloads will move to the cloud at large: Cloud migration has been talked about for quite some time now, but it will become a reality in 2015. It’s not only about Amazon Web Services; Google Compute Engine and Microsoft Azure will set records too, along with service veterans like CenturyLink Savvis, Verizon Terremark, and Rackspace.

2. Hybrid Cloud Computing: Hybrid cloud computing is the combination of public or private cloud services with physical application infrastructure and services. Judging from recent developments, hybrid cloud computing looks quite promising as a unified, integrated cloud model spanning internal and external cloud platforms.

3. Cloud investment optimisation: As cloud services promise to deliver a range of benefits, such as the shift from capital-intensive to operational cost models, cloud investment optimisation is on the cards. The cloud can also be used to shift the focus of IT resources to higher-value activities for the business or to support business innovation. These benefits need proper investigation, as many challenges can stand in their way, such as security, lack of transparency, and concerns about performance and availability.

4. Containers will get more popularity: The cloud poses some problems for IT operations, but containers have helped solve these issues. Developers love containers, and now operations teams also need to containerise different parts of an application, move them to different types of cloud infrastructure, and manage them separately.

5. Price leadership will see the next step: In 2015 a two-tier public cloud structure will take shape, with Amazon, Azure, SoftLayer, and Google Compute/App Engines in the top tier. But a low-price, minimalist infrastructure tier, from providers like Netcraft and DigitalOcean, will be more popular among independent developers, startups, and small businesses. Lightweight, fast cloud services will be the biggest trend in 2015.

6. Cloud Friendly Decision Frameworks: Cloud computing offers important features and benefits, such as cost-effective, usage-based models of IT consumption and service delivery, and this is now accepted by almost everyone. Cloud computing also lets IT focus on new services. But the success of cloud adoption depends on optimising the approach to the organisation’s requirements: first understand the concerns, then plan, implement, and optimise the cloud strategy properly.

7. Application Designs should be cloud-optimised: Organisations usually move their existing enterprise workloads and application infrastructure to the cloud as they are. But to exploit the full potential of the cloud model, applications should be designed to be cloud-optimised.

8. Software-defined security will protect workloads: Software-defined security will become an integral part of software-defined data centres and accompany workloads into the cloud. If the network, the storage system, and the containers and virtual machines are defined in software on the host servers, then security can be defined there as well. Software mapping systems identify system perimeters, and intelligence is fed into a central monitoring system.