Programming evolves as fast as every other field of technology, and so do the languages programmers use. While Java, C++, JavaScript, PHP, Ruby and the like have been there and done that, plenty of other languages are now being increasingly embraced by programmers across the globe for the sheer fact that they offer a little extra over the majors. Here are eight such lesser-known languages creating quite the storm today.

1. Racket 6.0: A general-purpose, multi-paradigm programming language in the Lisp/Scheme family. The platform provides an implementation of the Racket language (including a sophisticated run-time system, various libraries, a JIT compiler and more) along with a development environment called DrRacket, itself written in Racket. It is used in a variety of contexts such as scripting, general-purpose programming, computer science education and research.

2. OCaml 4.01: The main implementation of the Caml programming language, extending the core Caml language with object-oriented constructs. OCaml’s toolset includes an interactive top-level interpreter, a bytecode compiler and an optimising native-code compiler. It has a large standard library that makes it useful for many of the same applications as Python or Perl, as well as robust modular and object-oriented programming constructs that make it suitable for large-scale software engineering.
3. Nimrod 0.9.2: A statically typed, imperative programming language that tries to give the programmer ultimate power without compromising runtime efficiency, which means it focuses on compile-time mechanisms in all their various forms. It combines a clean infix/indentation-based syntax with a powerful (AST-based, hygienic) macro system and a semantic model that supports a soft real-time GC on thread-local heaps.
4. Julia 0.2.1: A high-level dynamic programming language designed to address the requirements of high-performance numerical and scientific computing while also being effective for general-purpose programming. Julia’s core is implemented in C and C++, its parser in Scheme, and the LLVM compiler framework is used for just-in-time generation of machine code.
5. Hack 1.0: A programming language for the HipHop Virtual Machine (HHVM), invented by Facebook. Hack can be seen as a new version of PHP that also runs on the HHVM, but it allows programmers to use both dynamic and static typing.
6. Groovy 2.2: An object-oriented programming language for the Java platform; a dynamic language with features similar to those of Python, Ruby, Perl and Smalltalk. It can be used as a scripting language for the Java platform, is dynamically compiled to Java Virtual Machine (JVM) bytecode, and interoperates with other Java code and libraries.
7. Egison 3.3.3: Touted as the world’s first programming language to realise non-linear pattern matching with backtracking. It can directly express pattern matching against lists, multisets, sets, trees, graphs and other kinds of data types.
8. Clojure 1.6: A dialect of the Lisp programming language, Clojure is a general-purpose programming language with an emphasis on functional programming. It runs on the Java Virtual Machine, the Common Language Runtime and JavaScript engines.

It’s that time of the year once again when lots of predictions are made for the upcoming year. Over the last couple of years, cloud computing has become an integral part of IT strategy across enterprises. Here are eight cloud computing trends that will drive cloud strategies throughout 2015 and shape cloud planning processes too.

1. Enterprise workloads will move to the cloud at large: Cloud migration has been talked about for quite some time now, but it will become a reality very soon in 2015. And it’s not only about Amazon Web Services: Google Compute Engine and Microsoft Azure will set records too, along with service veterans like CenturyLink Savvis, Verizon Terremark and Rackspace.

2. Hybrid cloud computing: A combination of public or private cloud services with physical application infrastructure and services is called hybrid cloud computing. Recent developments suggest that hybrid cloud computing is quite promising as a unified, integrated cloud model spanning internal and external cloud platforms.

3. Cloud investment optimisation: As cloud services promise to deliver a range of benefits, such as the shift from capital-intensive to operational cost models, cloud investment optimisation is on the cards. The cloud can also be used to shift IT resources to higher-value activities for the business, or to support business innovation. These benefits need proper investigation, though, as plenty of challenges might come in their way: security, lack of transparency, and concerns about performance and availability, among others.

4. Containers will gain popularity: The cloud poses some problems for IT operations, and containers have helped solve them. Developers love containers, and now operations teams also need to containerise different parts of an application, move them to different types of cloud infrastructure and manage them in parts.

5. Price leadership will take the next step: In 2015 a two-tier public cloud structure will take shape, with Amazon, Azure, SoftLayer and Google Compute/App Engine in the top tier. Meanwhile, low-price, minimalist infrastructure providers such as Netcraft and DigitalOcean will be more popular among independent developers, startups and small businesses. Lightweight, fast cloud services will be the biggest trend in 2015.

6. Cloud-friendly decision frameworks: Cloud computing offers important benefits, such as cost-effective, usage-based models of IT consumption and service delivery, and almost everyone believes that by now. With cloud computing, IT can also focus on delivering new services. But successful cloud adoption depends on optimising the structure for your requirements: first understand the concerns, then plan, implement and optimise the cloud strategy properly.

7. Application designs should be cloud-optimised: Organisations usually transfer their existing enterprise workloads to cloud application infrastructure as-is. But to exploit the full potential of the cloud model, applications should be designed to be cloud-optimised from the start.

8. Software-defined security will protect workloads: Software-defined security will become an integral part of software-defined data centres and accompany workloads into the cloud. If the network, the storage system, and the containers and virtual machines on host servers are all defined in software, then security can be defined in software too: mapping systems identify system perimeters, and that intelligence is fed into a central monitoring system.

Programmers are always advised to improve their skills in C, Java, Objective-C, PHP and similar languages. But the exciting part is that a few new languages with huge potential are slowly being introduced and entering mainstream programming too. Some of them evolved from existing languages, and these newer languages help make application development simpler for programmers.
1. D:
This is one of the hottest new programming languages, and it is used by Facebook. It’s a refreshed version of C++ that takes its inspiration from Python, Java, Ruby, Eiffel and C#. It’s easy to write code in D, it doesn’t require a preprocessor, and it handles Unicode excellently. With its high efficiency and productivity, D looks set to expand its reach a lot in the coming years.
2. Dart:
Dart was created by Google and is expected to become a new language for web programming.
Dart uses C-like syntax and keywords, and its objects are defined through classes and interfaces. Dart allows programmers to declare variables with static types, though this is optional. Dart is not very usable today, but it has a strong future as a competitor to JavaScript.
3. Ceylon:
Gavin King, the creator of the Ceylon programming language, knew best how to create a language better than Java, which is why he created Ceylon in collaboration with Red Hat. This language is said to have the potential to kill Java one day. It runs only on the Java runtime environment, which means Java has a huge role to play in Ceylon, but Ceylon offers a more regular syntax and allows developers to overcome the limitations of Java.
4. Go:
It’s a programming language for everything from application development to systems programming. It’s more similar to C and C++ than to Java and C#, yet it has some modern features too, like garbage collection and runtime reflection. Go is an easy language to program in, and its basic syntax is C-like. The Go team aims to offer the ease of a dynamic scripting language with the performance of a compiled one. The language is still under development, and it differs from other languages a lot.
5. F#:
Computer scientists are quite familiar with the concept of functional programming, but languages like C++ and Java lack good integration of functional-style code and libraries. Here comes F# (F-sharp), a Microsoft language that is both functional and practical. It’s based on the .NET Common Language Runtime.
6. Opa:
Web development is not a very simple thing to do: web apps require coding in multiple languages, with HTML and JavaScript on the client, Java or PHP on the server, SQL for the database, and more. Opa is not another alternative to one of these languages; it combines client and server frameworks into a single language.
7. Scala:
Scala is more often than not compared to Java. It’s not very new, having been around for ten years, but it’s still not considered one of the essential programming languages. Developers, however, find Scala very productive because it’s concise and favours a more functional programming style. It offers a potent mix of object-oriented and functional programming.

Databases are primary targets for cyber-criminals, as most valuable and sensitive data is kept in databases. Database security is therefore a necessity. There have been several incidents in which users’ personal data was compromised through database hacking. Security measures for databases are taken to protect data and to stop hackers from accessing documents held in online databases. Yet even with these measures in place, some failures occur repeatedly. These gaps can appear at any development stage, during the integration of applications, or while updating the database system. Here are the ten most common vulnerabilities found in database-driven systems:

1. Failures in deployment: The biggest weakness in a database is carelessness during deployment. Search engine optimisation is valued for business success, and SEO can only succeed once the database is in order. A functionality test is a must to confirm the performance level, but such tests cannot verify that the database isn’t doing something it’s not expected to do. Hence, before deploying a database, its strengths and weaknesses should be thoroughly checked.

2. Broken databases: If there is a bug in database server software, vulnerable machines can be attacked as soon as the database is deployed. Such bugs are typically exploited through buffer-overflow vulnerabilities, and they demonstrate the difficulty of keeping up with security patches and fixes. Due to a lack of time and resources, businesses are not always able to apply patches to their systems regularly, which is why databases are left vulnerable.
3. Excessive permissions: Most databases have users configured with excessive permissions. User accounts often carry unnecessary default privileges and excessive access to functionality.
4. Leaked data: Network security is often neglected when deploying a database system. Databases are usually thought of as back-office systems kept away from Internet access, so data communication with the database often goes unencrypted. But the networking interface of the database should not be ignored: if network traffic is intercepted by an attacker, it’s very easy for them to get at user data. Transport Layer Security should always be enabled. Encryption with TLS (or its predecessor, Secure Sockets Layer) has little impact on network performance, but it makes it very difficult to collect any data from the database system.
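As a minimal sketch of the idea in Python (the database driver and connection details will vary by product, so only the TLS setup is shown), the standard library’s ssl module can build a client context that refuses unverified or legacy-protocol connections before any query is ever sent:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """Build a TLS client context suitable for wrapping a database connection."""
    ctx = ssl.create_default_context()            # verifies server certificates
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions
    ctx.check_hostname = True                     # cert must match the server host
    return ctx

ctx = strict_client_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # → True
```

Most database drivers accept such a context (or equivalent `sslmode`-style flags) when opening a connection, so the hardening lives in one place.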
5. Insider risks: Databases face two kinds of threats, external and internal. Some people inside an organisation can steal information for personal profit, and this is one of the most common issues in large organisations. To counter this problem, data archives should be encrypted so that insider risk is reduced.
6. Abuse of database features: In the last few years, database exploits have mostly arisen from the misuse of standard database features. Hackers gain access through legitimate credentials, exploiting simple flaws that allow systems to be bypassed. Unnecessary tools should be removed to stop or limit the abuse of database features, and the attack surface, which hackers usually study before attacking, should be shrunk for the same reason.
7. Weak passwords: Database users often have weak and sometimes default passwords. If systems don’t enforce stronger passwords, databases can easily be compromised; weak database passwords also suggest that other systems inside the network have weak credentials. Such passwords are easily guessed or cracked, giving attackers access to the database.
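As an illustration of enforcement (the exact policy below is an assumption; real deployments should also check breached-password lists and reject known defaults), a minimal strength check in Python might look like this:

```python
import re

def is_strong(password: str, min_length: int = 12) -> bool:
    """Reject passwords that are too short or lack character variety."""
    if len(password) < min_length:
        return False
    # Require at least one lowercase, uppercase, digit and symbol character.
    checks = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    return all(re.search(pattern, password) for pattern in checks)

print(is_strong("admin"))              # → False (short, default-style password)
print(is_strong("c0rrect-Horse-42!"))  # → True
```

Hooking a check like this into account creation and password resets closes the easiest route in.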
8. SQL injections: This is a major problem when it comes to protecting databases. SQL injection attacks target applications, and database administrators are left to clean up the mess created by malicious strings inserted into queries. Web-facing databases are best protected by enabling firewalls and by keeping untrusted input out of SQL statements.
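The standard application-side defence is parameterised queries, which keep user input out of the SQL text entirely. A sketch with Python’s built-in sqlite3 module (the table and values are purely illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Untrusted input that would break out of a string-concatenated query:
user_input = "alice' OR '1'='1"

# Parameterised query: the driver treats the input purely as data.
rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # → [] (no user is literally named "alice' OR '1'='1")

rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", ("alice",)
).fetchall()
print(rows)  # → [('admin',)]
```

The same placeholder mechanism exists in every mainstream database driver, so there is rarely a reason to build SQL by string concatenation.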
9. Sub-standard key management: Key management systems are meant to keep keys safe, yet encryption keys are commonly stored on company disk drives. Keys are sometimes left on disk after database failures, and if keys sit in such locations, the databases they protect are left vulnerable to attack.
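One mitigation is to never store the long-term key on the database host at all, deriving it at runtime from a secret supplied externally. A hedged sketch in Python (the environment variable name is hypothetical, and a production system would use a dedicated key management service or HSM rather than this):

```python
import hashlib
import os

# Hypothetical setup: the passphrase is injected at deploy time via the
# environment, so no long-term key ever sits on the database host's disk.
os.environ.setdefault("DB_KEY_PASSPHRASE", "example-only-passphrase")

def derive_key(salt: bytes) -> bytes:
    """Derive a 256-bit encryption key from the externally supplied passphrase."""
    passphrase = os.environ["DB_KEY_PASSPHRASE"].encode()
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

key = derive_key(b"per-database-salt")
print(len(key))  # → 32 bytes, i.e. a 256-bit key
```

The key exists only in memory while the process runs, which is exactly the property the flat file on disk lacks.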
10. Database irregularities: The most important issue is a lack of consistency in databases, which is both an administrative and a database technology problem. System administrators and database developers should maintain consistency in their databases, always stay aware of threats, and make sure vulnerabilities are taken care of. Proper documentation and automation are needed to track and manage changes so that all information in enterprise databases remains secure.

Are you flooded with ever-increasing amounts of data? This is the biggest headache for IT departments, and IT managers are always battling to make the most of the hardware they have, even on a limited budget. There are certain ways to improve the performance of existing storage hardware. Here are the eight best techniques for boosting storage performance in business environments:
1. Add solid state drives to existing storage:
Storage vendors usually add solid state disks (SSDs) to an existing enterprise storage area network (SAN). Such deployments face certain restrictions, since the drives operate within the array’s bandwidth limitations. Hard disk drives need to be utilised fully, and adding SSDs to the existing architecture calls for special caution, as a careless deployment can cause severe damage to the entire system. Still, this is the simplest option and carries the lowest risk in tackling bottlenecks; it also fits existing storage management practices and helps in tackling the technical challenges involved.
2. Make use of built-in features:
Storage quality of service (QoS) is an under-appreciated performance booster. It throttles or prioritises input/output-intensive virtual machines (VMs) so that they don’t impact the performance of others. Implementing SCSI commands in paravirtualised rather than emulated controllers also boosts data storage performance.
3. Throw hardware at the problem:
Data centre infrastructure is a broadly discussed issue. Adding hardware to solve performance problems is a time-tested approach: an easy one, but expensive. It involves adding more hard disks to the existing storage in order to add IOPS. It’s a legacy way of addressing storage issues, but it remains one of the most effective.
4. Add flash cards to servers:
This is one of the simplest ways to address performance challenges. It involves opening up servers and adding flash cards, such as PCI Express cards, to the storage design. This is the best approach for single applications and single-server environments, but adding flash cards in an enterprise storage context again needs extra caution: it can limit the growth and functionality of a virtualised solution, and restrict expansion and direct data and solution growth. On the other hand, the risk is low, because only the host configuration is modified, not the entire enterprise storage architecture.
5. Place data on the appropriate storage:
Auto-tiering has received a lot of attention and has become an essential practice for reducing latency. It’s not a brand-new technique, but it’s gaining further importance as more people use flash in their enterprise storage arrays.
6. Boost performance with hybrid storage and flash cache:
A hybrid storage model that blends flash and spinning disk offers high performance. Flash hybrids determine which written data receives the most read requests and copy that data to SSDs to give quicker access. They are particularly helpful for transactional applications.
7. Utilise all-flash arrays:
All-flash arrays (AFAs) have flooded into the storage market over the last year. They provide extremely high input/output rates and are designed for low-latency data delivery. In most cases, however, the project cost undermines the cost justification for standard enterprise-class projects. The arrays require extremely high capital and operating expenses, which can make them an unsuitable and unbalanced approach, and long-term development plans are affected too.
8. Storage virtualisation technology:
Virtualising and pooling storage is a proven way to improve the performance of data storage systems. Storage capacity is aggregated, and the value-added features of the hardware are spread across the whole pool. Pooling permits results similar to dedicated high-performance storage: spreading I/O across multiple hosts reduces contention, and it provides features such as high availability and load balancing.