Category Archives: Technology

Technology changes rapidly today, and the disruptive forces it unleashes are difficult to fathom. It is important to stay abreast of day-to-day developments in the world of technology so as to be proactive in meeting the challenges of obsolete technologies and methodologies. These days, technology drives business. We want to keep you updated with blogs, articles and news on the trending forces in the tech-driven world.

To Hadoop or Not

Mention “Big Data” or “Data Analytics” today and you will almost certainly hear of “Hadoop”. Hadoop is often positioned as “The Framework” that will solve all your Big Data needs. This framework from the Apache Software Foundation is open source, i.e. anyone can use it for free, and it is capable of handling huge data sets across a large cluster of commodity hardware. It is therefore popular in the development community.

Big Data, as the name suggests, refers to data that is huge, whether structured or unstructured. Examples include the enormous amounts of data generated on social media sites through the day-to-day interactions of users of Facebook, Twitter, LinkedIn, etc.: chat messages, images, videos and so on. Other sources are IoT applications, such as industrial or manufacturing process data (temperature, pressure, etc.) that changes in real time and accumulates into huge data sets when measured over periods of time. Further examples include telecom usage data, space or weather data, and stock trading data.

Hadoop is handy if your needs are ETL (Extract, Transform and Load) operations. However, do not fall into the Hadoop trap unless you have a clear understanding of your business needs. Ask yourself the following questions before you decide to invest your time and money in implementing Hadoop.

  • Do you really have terabytes or petabytes of data to be processed? Hadoop was designed to handle huge data volumes of this scale. However, a report from Microsoft states that the majority of jobs process less than 100 GB of data. If your data size is below the terabyte scale, you may not require Hadoop. Even if it is larger, do you really need to process all of your data?
  • Are your data needs real time? If you expect to process data in real time, Hadoop is not the best tool for the job. Hadoop takes time to process data and is, at heart, a very good batch processing tool. If your business needs to interpret data in real time, such as stock market movements used to make buy or sell decisions, Hadoop is not the answer.
  • Do you require a quick response? Your response time requirements need to be well understood. If your users are not willing to wait a minute for results over large data sets, you may have to use other real-time tools and not Hadoop.
  • Does your requirement involve complex and computation-intensive algorithms? The MapReduce algorithm in Hadoop efficiently processes large volumes of data in parallel by splitting large files into smaller ones and distributing them across machines. However, it is not apt for computation-intensive requirements with a large number of intermediate data steps, e.g. computing the Fibonacci series. A few machine learning algorithms also do not fit the MapReduce paradigm, so the pertinent question is whether the business requires heavy use of specialised algorithms. If so, your technology experts need to analyze whether the required algorithms are MapReducible.
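To make the paradigm concrete, here is a minimal, single-machine sketch of the MapReduce idea (a toy word count, not Hadoop’s actual API): a map step emits key-value pairs, a shuffle step groups them by key, and a reduce step aggregates each group.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) key-value pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by their key."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    """Reduce: aggregate the grouped values (here, sum the counts)."""
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["Hadoop handles big data", "big data needs big storage"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])  # 3
```

In a real cluster, the map and reduce steps run on many machines in parallel and the shuffle moves data over the network, which is why the model suits large batch jobs rather than low-latency queries.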

Hadoop would be your choice when:

  • You want to transform largely unstructured or semi-structured data into usable form.
  • You want to analyze a huge data set to obtain insights, and you have ample time for such analysis.
  • You have overnight batch jobs to process, e.g. daily transaction processing for credit card companies.
  • The insight gained from the analysis remains applicable over a longer period of time, e.g. social behavior analysis on social sites: likes and affinity analysis, job suggestions based on browsing history, etc.
  • Your data takes the form of key-value pairs, which Hadoop processes efficiently.

So… before you rush to select Hadoop as your framework, analyze your needs carefully. Though Hadoop itself is free, implementing it takes effort, and the budget for implementation may not be cheap.

Why Dave, the Salesman Hates the CRM

It is a common scenario in organizations. A new director joins the sales team, looks for marketing and sales performance details and cannot find them. The only data available is sales per team member, the targets set and the performance achieved by the sales team. Sales team members work in isolation and are afraid to share data even with their own team leaders. The organization seems to be “performing”, in an “auto-driven mode”. Good enough… isn’t it?

Not so for the director, who wonders how he can assess overall performance without “reliable and sufficient data”. How can he control the process? What is the solution?

Nirvana does not seem to be far off. The director’s ray of hope hinges on a Customer Relationship Management (CRM) tool. Customer Relationship Management has a wide connotation: a set of technologies, practices and strategies that companies use to analyze and manage customer interactions and data across the entire customer life cycle. This process improves customer retention, ultimately helping sales growth. A CRM package can provide automation in various flavors such as Sales Force Automation, Marketing Automation and Services Automation. A collaborative CRM can integrate with external suppliers, distributors and vendors. It may also be highly analytical, offering data mining, pattern recognition and correlation features.

The director is convinced, and soon a CRM is implemented across the organization. Dave, the salesman, is not convinced by this approach. The sales team is used to manual systems, and adopting a CRM brings a set of challenges:

  • Sales team members usually prefer interacting with people rather than with a CRM. Their time is better spent talking to potential customers than entering data.
  • Using the CRM does not visibly help the sales team members achieve their targets. The CRM disrupts their existing ways of working, which, they believe, have made them “successful” in achieving targets.
  • Lack of systems/IT knowledge among sales team members hinders adoption.
  • The decision to implement the CRM was taken by senior management without involving the rest of the business or the users of the system. Stakeholders like Dave are bound to be unhappy, as their needs are not completely addressed.
  • Overcomplicated systems require adequate user training and hand-holding, followed by self-practice.

The challenges seem to override the obvious benefits that a CRM brings, such as greater visibility into operations and the ability to monitor, store, archive and extract data securely based on users’ access rights.

How do we ensure that organizations adopt CRM effectively and that the implementation does not result in a misadventure? Here are a few tips from the experts:

  • Convey the objectives of using the CRM clearly to the sales team: the business benefits for the organization as a whole, and the ease of operations for the sales team.
  • Involve the sales team in designing the system. This ensures buy-in from the sales team members.
  • Demonstrate the Return on Investment (ROI) of the venture, not only for the organization but also for the sales team, and communicate the wins from the CRM implementation to the team.
  • Finally, a major reason for the frustration of sales folks is the complexity of the CRM. Often, the CRM uses data-intensive entry screens and makes data entry mandatory, even though much of this data is not actually needed. Data entry needs to be minimized so the system is easy to use.

Therefore, any CRM implementation requires sufficient planning to ensure that the stakeholders are involved and their requirements are met. Then, finally, Dave may like the CRM instead of hating it.

Big Data to Bigger Data, Microprocessor to DNA – A Competition with God

Integrated circuits (ICs) arose from the need to pack large numbers of transistors, diodes and other components into a small chip. Microprocessors such as the 8085 and 8086 of the late seventies quickly evolved into a range of ICs that became ubiquitous across a wide gamut of computers and customers. Miniaturization and the addition of more power per chip have been on the rise ever since, as data needs have constantly increased.

The world of Big Data has set goals for the next level of chip evolution. IC manufacturers have been aggressive and competitive in miniaturization and processing power. This journey, however, is likely to reach a dead end soon as silicon technologies approach their physical limits. That is a potential risk for evolving technologies such as Artificial Intelligence, which require humongous data, a BIGGER DATA set than Big Data. So… what is the mitigation for this risk?

Scientists have found a solution to this limitation in God’s own creation: DNA, the natural supercomputer present in the cells of the human body. Deoxyribonucleic acid (DNA), which comprises our genes, performs functions and calculations far faster than today’s fastest supercomputers.

Microsoft, along with a team of academic researchers, recently used biotechnology to store about 200 megabytes of information, including a compressed video, in a fraction of a drop of liquid. The team deftly converted and mapped the binary 1s and 0s of silicon technology to the four bases of DNA: adenine, cytosine, thymine and guanine.
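Conceptually, the bit-to-base mapping can be sketched in a few lines of Python. This is a naive two-bits-per-base scheme for illustration only; the actual encodings used in DNA storage research add error correction and avoid problematic runs of repeated bases.

```python
# Naive illustrative scheme: map each 2-bit pair to one of the four DNA bases.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a strand of bases, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i+2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand of bases."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i+8], 2) for i in range(0, len(bits), 8))

strand = encode(b"Hi")
print(strand)          # CAGACGGC
print(decode(strand))  # b'Hi'
```

Each base carries two bits, so even this toy scheme packs four times as many symbols per position as binary, before accounting for DNA’s far smaller physical size.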

DNA storage draws on some of the latest techniques in security and data compression. The units of coding in DNA are very small, less than half a nanometer, whereas modern-day transistors are about 10 nanometers in size. Moreover, DNA can be packed in a three-dimensional configuration, unlike transistor chips, which are laid out in two dimensions. This gives DNA a density and processing potential more than 1,000 times that of silicon chips.

Therefore, DNA could be of great use in storage technologies. In the near future, we may use God’s own creation to derive unprecedented benefits in data-based technologies, and potentially create humanoids more powerful than human beings.

Why Are We Scared of Artificial Intelligence

Artificial Intelligence (AI) is considered the next technology evolution, one that could disrupt a large chunk of jobs performed by human beings. The learning machines being designed are likely to learn faster than human beings. Consequently, they could even turn out to be our foes, or pose survival threats to humanity. Not surprisingly, a host of prominent personalities, such as Bill Gates of Microsoft, Elon Musk of Tesla, and Stephen Hawking, the famous physicist of our times, have warned us about the serious consequences that AI could have for the human race.

The last few years have witnessed a lot of work and implementation in the automation space, particularly in Artificial Intelligence. Without appropriate regulations in place, there could be valid reasons for such fears. Think of a scenario where AI is in the wrong hands and is being used with evil intent!

The state of AI research today is far below the level that would allow robots to dominate human beings. Essentially, AI is based on code, or algorithms, written by developers. Therefore, the focus of AI in use today is limited to specific jobs at hand. In that sense, robots with AI are still stupid: they are wholly controlled by the scientists who design their intelligence. Today’s AI is limited to specific, often repetitive activities. Robots designed on AI focus on a narrow scope of work; they possess Artificial Narrow Intelligence (ANI). Human intelligence, by contrast, manifests across multiple tasks and encompasses a wide assortment of activities that cannot be easily replicated.

Examples of AI include driverless cars, IBM’s Watson, the supercomputer that beat human champions at Jeopardy!, Deep Blue, which defeated the world chess champion, and self-learning bots. How do these systems work? Simply put, they rely on large data sets (Big Data), a huge number of “if-then” rules, i.e. algorithms designed by human beings, NLP (Natural Language Processing) for any language skills, and so on. If there is a defect or bug in the programming, the computer has to obey its master and cannot rectify itself. AI systems are not yet creative enough to build upon themselves; they still depend on the masters who design and completely control their operations.
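As a trivial illustration of such “if-then” logic, consider a hypothetical thermostat agent: all of its “intelligence” is a handful of hand-written rules, and it can do nothing outside them.

```python
# A toy rule-based "agent" (hypothetical rules, for illustration):
# its entire behavior is explicit if-then logic written by a human.
def thermostat_agent(temperature_c: float) -> str:
    if temperature_c < 18:
        return "heat"
    elif temperature_c > 24:
        return "cool"
    else:
        return "idle"

print(thermostat_agent(15))  # heat
print(thermostat_agent(30))  # cool
```

If a situation arises that the rules never anticipated, the agent cannot improvise; a human must rewrite the rules, which is exactly the dependence on designers described above.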

Like any other machine, an AI-based machine has its limitations and can be dangerous if not controlled. Take, for instance, the Tesla running its “driverless” Autopilot feature that was involved in a fatal accident, killing the person behind the wheel. Unless the algorithms in AI systems are properly written, such costly mistakes are bound to happen.

Human beings are blessed with “common sense” that helps them take decisions easily, whereas these fifth-generation machines lack this human attribute. They are essentially “dumb” outside whatever they have “learnt”. They still have to be monitored, mentored and controlled by human beings. Hence, fears of an AI apocalypse are not well founded.

As more automation is built into and around our systems, including those with AI, existing jobs will be replaced or, in some cases, updated to extend AI-based tools. Nevertheless, because of “common sense”, human beings will be better placed than robots to evolve and take up new roles that add value to society. Time and again, the evolution of technology has taught us this lesson, and AI will be no different.

Progressive Web Apps – A Promise for Web Developers

“Disruptive” and “radical” are adjectives often used to describe novel technologies. Ajax, with its concept of responsiveness, caused one such transition in web engineering, letting web applications rival traditional desktop applications. Another notable transition was the arrival of native apps in the mobility domain. With Android and iPhone aggressively grabbing market share from the traditional BlackBerrys, a new demand was witnessed around 2014: native apps that could be published in the Google Play or Apple App stores and used directly on mobile devices. Mobile websites seemed to have lost the battle to these apps, which were capturing the market rapidly; no longer profitable, many such sites were shutting down. By the end of 2015, millions of apps had been developed (1.5 million in the App Store and another 1.9 million in the Google Play Store).

Are all of these apps of use? The answer is no. The majority of these apps today are “zombies”: apps sitting in the stores that have rarely or never been downloaded. Therefore, the question remains: is the promise of native apps fading, to be eventually lost to the web?

Native apps were “hot potatoes” for users because of the feeling of ownership, of having apps on the device home screen, fast loading and offline usage. Mobile developers were quick to latch on to this demand and built countless apps. Web developers, on the other hand, focused on server-side technologies and new JavaScript features and components: Node.js, Angular.js, React.js, HTML5, Web APIs, single-page applications, etc. Progress in this domain was relatively slow, given the limitations of these technologies. Things, however, started to change with the advent of Progressive Web Apps. The functionalities viewed as key advantages of smartphone apps became progressively available for web developers to extend to their mobile websites.

Progressive web apps (PWAs) use the same technologies that web developers already use: HTML, CSS, and JavaScript. Moreover, they do not require special toolkits such as the Android SDK or iOS SDK, and can be developed using a simple editor like Notepad++. With these inherent advantages, web developers are excited again about the numerous possibilities this technology creates for them.

Still confused about the idea of PWAs?

Here is a definition from Google: “Progressive Web Apps are experiences that combine the best of the web and the best of apps.” Essentially, these apps aim to give web users an experience similar to that of native apps. As the name suggests, they offer a few key features:

  • Progressive: accommodate any user, irrespective of their choice of browser.
  • Responsive: fit any kind of screen, whether laptop, desktop, mobile, tablet or any other form factor.
  • Connectivity-tolerant: work offline or on networks with poor connectivity, with the help of service workers.
  • App-like: built on an app shell model, offering an app-like feel on the web, with app-style navigation and interactions.
  • Fresh: always up to date, thanks to the service worker update process.
  • Secure: served over HTTPS to prevent pilfering or corruption of information.
  • Search-engine friendly: easily identified and indexed by search engines such as Google or Bing.
  • Re-engageable: push notifications win back customers lost to unavailability or abandoned sessions. Service worker threads, which remain “alive” beyond the life of a browser session, hold the key to these notifications, making them available to users in later sessions.
  • Installable: apps can be placed on the home screen without the need for an app store.
  • Linkable: shareable by URL, with no complex installation required.

With the arena for progressive web apps heating up, there are some interesting uses of the concept. Flipkart claims that its PWA, Flipkart Lite, improved user retention by about 70%. Another site that has received rave reviews for its PWA is The Washington Post.

Like any technology, Progressive Web Apps have their own issues. One concern is that the mobile website is becoming more app-like, creating a demarcation in look, feel and usage between a mobile website and a standard website. For an app to qualify as a PWA in Google Chrome, the browser checks for a few properties and, if satisfied, extends abilities such as adding the app to a smartphone’s home screen. PWAs are also required to provide a manifest for their site, declaring a display mode such as “standalone”, “fullscreen” or “browser”. “browser” is the only mode in which the URL is visible and the app looks like a web page; however, Google does not register this mode as a PWA.
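For illustration, a minimal web app manifest might look like the following sketch (all field values here are hypothetical; the `display` member selects one of the modes discussed above):

```json
{
  "name": "Example PWA",
  "short_name": "Example",
  "start_url": "/index.html",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#2196f3",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}
```

Choosing "standalone" here is what hides the browser chrome, including the address bar, which is exactly the trade-off between app-like feel and web-like linkability.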

Developers, therefore, seem to be focusing more on the “app” part than on the “web”. The URL, an essential ingredient of the web since its inception, is no longer visible in these implementations, essentially stripping the web of one of its basic features, i.e. the address bar.

As usual, the development is chaotic and messy, but that does not deter developers from aggressively pushing to establish this as a standard. Expectations are high, and the effort is likely to bring out the best the web has to offer, further empowering users.



Do Not Let Your Data Kill You – The Need for 3 R’s – Reduce, Recycle and Reuse

As the saying goes, anything in excess is a waste. Isn’t that true of information today? Information, or “data”, the four-letter word that represents the digital world, has overwhelmed you, me and everyone in this space. Data in this form has various connotations: the popular “Big Data”, large or complex data, humongous data, and so on.

On average, company data has been growing at a rapid pace: about 100% or more every year. With social media users being hyperactive, real-time data transactions have multiplied manifold. Though technical advances make it possible to store this data in large repositories, there is a need to derive context, i.e. meaningful information, so as to Reduce, Recycle and Reuse data. For example, companies would like to use their data to understand and interpret employee interactions, communications and client engagements. Data that is not used but occupies repository space is a costly waste and needs to be eliminated. Regulatory requirements demand intelligent, statutory reports that can be audited easily if the need arises. The 3 R’s, put into practice, improve data management in a business environment:

Reduce: Regulatory requirements for data, e.g. PCI data storage requirements or other information governance and compliance standards, require one to be circumspect before planning any reduction of data. This caution makes clean-up hard: it results not only in a large volume of unused data, but also in users saving copies in their local repositories, with subsequent backups by the IT team.

So how do I reduce unused data? A document retention policy, specifying the criteria for holding or removing data, the process governing such decisions, and the owners responsible for implementation and oversight, is the first proactive step any company can take to ensure that only appropriate data is maintained. With a policy in place, the discipline to actually implement it enables a large reduction in unused data.
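As a sketch, enforcing the first step of such a policy can be as simple as scanning a repository for files older than the retention window. The 7-year window below is a hypothetical policy value; a real implementation would consult per-category rules and route candidates for review rather than delete them outright.

```python
import os
import time

RETENTION_DAYS = 7 * 365  # hypothetical policy: retain documents for ~7 years

def expired_documents(root, retention_days=RETENTION_DAYS):
    """Yield paths of files whose last modification predates the retention window."""
    cutoff = time.time() - retention_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                yield path  # candidate for archival or deletion, pending review
```

The point of the sketch is the shape of the process, not the mechanics: the policy supplies the cutoff, and the tooling merely surfaces what falls outside it.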

Recycle: Regulatory reporting is important in many industries. In the US, for example, health industry reports are mandatory, not only for companies but also for patients, and the industry is well regulated. Taxation and financial obligations also require statutory reporting and audits. It is important for data to be recycled and processed into useful reports for auditors and statutory authorities. Usually, intelligent software and ETL techniques help in recycling such data.

Reuse: The most interesting part of data management is the reuse of data. The worlds of Business Analytics and Business Intelligence offer options for deriving business insights from large data sets and intelligently reusing data. A new discipline, “Data Science”, has evolved in its own right and is prominently advocated by the Harvard Business Review. The HBR article by Thomas H. Davenport and D. J. Patil even calls the job of the data scientist “the sexiest job of the 21st century”.

A few terms often used for reuse of data are:

  • Data Science: a term that loosely entails a combination of computer science, analytics, statistics and data modeling. While the combination is loose, and some companies have evolved their own courses and certifications, it has yet to mature as a science with comprehensive tenets and an elaborate literature.
  • Smart Data: usually a subset of Big Data with the noise filtered out. While Big Data is characterized by variety, velocity and volume, smart data is usually characterized by veracity and value. Smart data is a key ingredient of intelligent BI reporting.
  • Predictive Analytics: smart methodologies, machine learning techniques and statistical algorithms applied to data to predict future outcomes. Companies gain by forecasting important outcomes, e.g. revenue or profit, from past data.
  • Real-Time Analytics: analytics served in real time, e.g. stock prices moving up or down; live updates on page views, sessions, bounce rates and page navigation; advertisements dynamically adjusted to the type and frequency of customer usage; etc.
  • Intelligent Decision Systems: the use of Artificial Intelligence with data to help users derive optimal decisions from a large number of input variables. While still evolving, it can be applied in a number of areas, such as marketing systems that make offers to customers based on profile analysis, or blocking fraudulent transactions in credit card operations.
  • Data Visualization: intelligent pictorial or graphical representation of data, in an interactive way, helps business professionals identify trends and patterns, e.g. sales by region or by customer profile.
  • Big Data Analytics: no discussion of data reuse is complete without the term Big Data. Big Data analytics has evolved from companies managing huge data sets, such as oil or telecommunications companies, to social media platforms such as Facebook, Twitter and LinkedIn. This form of analytics helps us uncover hidden patterns, market trends, customer preferences, unknown correlations, etc.
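As a toy illustration of predictive analytics, an ordinary least-squares trend line can be fitted to past figures and extrapolated forward. The revenue numbers below are hypothetical.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit: return (slope, intercept) of the best line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2013, 2014, 2015, 2016]
revenue = [100, 112, 125, 138]  # hypothetical figures, in millions

slope, intercept = fit_line(years, revenue)
forecast_2017 = slope * 2017 + intercept
print(round(forecast_2017, 1))  # 150.5
```

Real predictive models add many more variables and validation steps, but the core idea is the same: learn a pattern from past data and project it onto an unseen period.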

Business data analytics, therefore, is in its infancy, to be nurtured, developed and evolved over the years. The attraction is immense, and so is the job of the data scientist!

Traffic Engineering & Its Scope- A Review

Traffic engineering is a modern branch of civil engineering that primarily encompasses the research, planning and geometric design of roads, highways, railways, bridges, traffic lights and traffic signs. According to the WHO, nearly 1.25 million people die every year in road accidents, and 20 to 50 million more are injured. With the number of two-wheelers and four-wheelers increasing manifold every year, road accidents are expected to become the 7th leading cause of death by 2030.

There has also been an alarming rise in railway accidents in the last 5 years, more than 80 percent of them attributed to derailments and human failure. Mishaps like bridge collapses are also on the rise, resulting in heavy casualties. It is therefore imperative, in the public interest, for every government to pay special attention to traffic engineering.

Why is traffic planning essential?

Many people might think that disasters like road traffic crashes and train collisions could be averted by installing more traffic signals in the right places and by reducing speed limits. These measures, however, are not completely effective in reducing accidents, because of the fast-paced increase in the number of vehicles over the last few decades. Moreover, research has revealed that excessive traffic regulation can increase traffic hazards. It must be noted that slow speed does not necessarily mean increased safety. People lose their cool amidst honking vehicles and heat; these are the times when many resort to jumping signals and flouting the traffic rules.

These days, traffic engineers analyse speed data, accident statistics, traffic counts and human psychology while laying out plans for infrastructure. Safety is linked to smooth traffic operation, and disruption can result in fatalities. In fact, traffic engineers are somewhat like medical professionals: they aim to do away with all kinds of unwarranted impediments to traffic movement, thereby enhancing safety and smooth movement.

Traffic engineering promotes uniformity in traffic rules, making it easier for the traffic police to administer and function while on duty. Commuters need not argue with traffic cops, since the interpretation of the rules is the same for everyone. Besides, traffic engineering lays down guidelines for public highway officials on administration, manufacture, installation and maintenance. Consequently, there is compatibility between commuters and administrators.

Importance of Traffic Engineering:

  • Traffic Control: Geometric design is an important aspect of traffic engineering, carried out on the basis of speed studies, traffic volume studies and traffic flow studies. Pavement designs, road structures and regulatory measures are determined accordingly. Automatic counters are installed to count the number of vehicles crossing a particular section of road throughout the day. Spot speed studies and spot speed data also help road planners frame rules for traffic movement.
  • Smooth Movement: Research has revealed that traffic engineering enhances the smooth movement of traffic, irrespective of volume. Despite a significant increase in the number of vehicles over the last few years, instances of unnecessary traffic jams have reduced; on the contrary, traffic movement has become smoother as most bottlenecks have been removed. Hence, traffic density does not affect speed much.
  • Introduction of Dynamic Elements: Instead of constructing additional infrastructure, dynamic elements are used in traffic management. Sensors are interconnected with guidance systems that monitor traffic movement.
  • Smooth Movement of Traffic during Rush Hours and Bad Weather: Rush hours pose severe challenges to daily commuters, and bad weather jeopardizes traffic movement. Traffic engineers, however, have been assiduously working to remove such impediments so as to enhance speed and public safety.
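To illustrate one of the computations above: a spot speed study typically reports the 85th percentile speed, a common basis for setting speed limits. A minimal nearest-rank version, with hypothetical observations in km/h:

```python
def percentile(speeds, pct):
    """Nearest-rank percentile of a list of spot speed observations."""
    ordered = sorted(speeds)
    rank = max(1, round(pct / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

# Hypothetical spot speed observations (km/h) at one road section
observed = [42, 45, 47, 48, 50, 52, 53, 55, 58, 60,
            62, 65, 67, 70, 72, 75, 78, 80, 85, 90]
print(percentile(observed, 85))  # 78
```

The interpretation: 85% of drivers travel at or below this speed, so a limit near it matches how the road is actually being used.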


  • Huge Investment: Traffic engineering demands huge investment, which inevitably drains the public exchequer. In developing countries like India, where a large population reels under poverty, it becomes imperative for the government to spend money on employment generation schemes, food subsidies, free education and other development plans.

Besides, the government needs to maintain a stockpile of artillery and firearms to safeguard the nation against external aggression. All this requires huge funds, and it is the taxpayers who end up bearing the burden. Under such circumstances, mammoth traffic engineering schemes tend to crush the middle-class taxpayers.

Traffic engineering also demands huge upfront costs, and the process of training city staff turns out to be expensive and time-consuming.

  • Failure of Signals: Although traffic engineering tends to sustain the smooth flow of voluminous traffic, it fails to do so beyond a certain limit, leading to logjams.

There are also instances when signals fail or stop functioning, resulting in unnecessary commotion and chaos.

  • Problem for Emergency Vehicles: Emergency vehicles like ambulances, and fire brigades need to reach their destinations on time; however, they get held up in bee-lines of vehicles. Traffic engineers need to design plans that would facilitate unrestricted passage for such vehicles. There should be separate corridors along the roads, and highways so as to augment smooth transit for emergency vehicles. It would be a herculean task for the traffic cops to keep away the other vehicles from entering these corridors. This is because the other commuters would be tempted to use these lanes during the rush hours, and flout the norms.
  • Decline in the Number of Accidents: Traffic engineering has attained great milestones, notwithstanding certain shortcomings. There has been a decline in the number of accidents in recent years. Traffic engineering has introduced countermeasures to reduce the number of hit-and-run cases, in which most of the victims are pedestrians. The numbers of traffic crashes and fatalities have also gone down considerably.
  • Part of Smart City Projects: As governments draw up ambitious plans for setting up smart cities, traffic engineering will be instrumental in furthering these grand schemes.


Intel’s Skylake Processor Release

It was just months ago, in January 2015, that we were talking about Intel's fifth-generation release, the Broadwell processors for desktops, laptops, tablets and hybrids. The processor offered modest performance boosts and increased battery life. However, the next version of the microprocessor was already waiting in the wings, as there had been considerable delay in the release of the Broadwell chip.

So, the Intel Skylake processor was released at the Gamescom trade show on August 5, 2015, just about six months after the last release. It came in two variants: the Intel Core i7-6700K and the Intel Core i5-6600K. The Core i7-6700K is the flagship processor at 4 GHz clock speed, quad-core and hyper-threaded (meaning eight virtual cores), whereas the Core i5-6600K is a 3.5 GHz quad-core processor without hyper-threading. There is also provision for overclocking these processors.

The timing of the release was excellent, as it coincided with Microsoft's release of the Windows 10 operating system. The release was the "tock" in Intel's typical "tick-tock" release paradigm, with the "tick" being the release of the Broadwell processor.

Further, at the recently held Intel Developer Forum (IDF15) in San Francisco, from 18 to 20 August 2015, Intel shared some of the architectural details for Skylake, the sixth-generation Intel Core i7-6700K processor, which were missing from the initial launch.

The Skylake processor includes the first Intel GPU with DirectX 12 support. A GPU, also called a Visual Processing Unit (VPU), excels at highly parallel information processing, making it much more effective than a CPU, and therefore very efficient at manipulating computer graphics and image processing. DirectX 12 is the latest version of Microsoft's DirectX API and is available on Windows 10 computers, tablets, mobile phones and the Xbox. DirectX hides hardware details from developers behind these APIs and also helps computer games run much faster. DirectX 12 support helps the Skylake processor work very well on Windows 10 based systems.

An important feature of the Skylake processor is its support for Rezence wireless charging in laptops, and all PC manufacturers have agreed to use this standard. A move towards "wire-free" computing seems evident, as many Skylake systems support wireless charging solutions. Intel is also reported to be working with hotel chains, automakers and coffee shops to install charging stations, so wireless systems will be in vogue.

Although we do not have detailed information from Intel on what the part numbers of future Skylake GPUs will be, or on the correspondence between cores and chips, we can infer some relationships. What we understand is that Intel has launched high-end desktop SKUs in what Intel calls the GT2 configuration. The next two configurations, GT3 and GT3e (with eDRAM), are meant for special desktop processors or mobile chips, while GT1 configurations map to lower-end chips in the Pentium and Celeron families. Like Intel's previous GPUs, Skylake's graphics core consists of slices, sub-slices and execution units (EUs). A slice consists of 3 sub-slices, and each sub-slice contains 8 EUs, so there are 24 EUs per slice. The largest GPU configuration consists of three slices, which means 72 EUs in total.
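The slice arithmetic above can be sketched in a few lines. This is a hypothetical helper, not an Intel tool; the constants simply encode the hierarchy quoted above (3 sub-slices per slice, 8 EUs per sub-slice):

```python
# Tally Skylake graphics execution units (EUs) from the slice hierarchy
# described above: each slice holds 3 sub-slices, each with 8 EUs.
SUBSLICES_PER_SLICE = 3
EUS_PER_SUBSLICE = 8

def eu_count(slices: int) -> int:
    """Total EUs for a GPU configuration built from `slices` slices."""
    return slices * SUBSLICES_PER_SLICE * EUS_PER_SUBSLICE

print(eu_count(1))  # 24 EUs per slice
print(eu_count(3))  # 72 EUs in the three-slice configuration
```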

Compared to Broadwell, Skylake processors are expected to operate at lower power consumption, provide a 10%-20% CPU performance boost in single- and multi-threaded applications, and deliver 30% faster Intel HD integrated graphics performance. Battery life is also expected to be 30% longer because of improved energy efficiency.

Like its predecessors, Skylake is available in four families, and each series is expected to show performance improvements. Preliminary data suggests the following improvements in MacBook models:

– Y-Series (MacBook): Targeted at the mobile segment; up to 17% faster CPU, up to 41% faster Intel HD graphics, and up to 1.4 hours longer battery life
– U-Series (MacBook Air): Targeted at the mobile segment; up to 10% faster CPU, up to 34% faster Intel HD graphics, and up to 1.4 hours longer battery life
– H-Series (MacBook Pro): Meant for the mobile segment; up to 11% faster CPU, up to 16% faster Intel HD graphics, and up to 80% lower silicon power
– S-Series (iMac): Meant for desktops; up to 11% faster CPU, up to 28% faster Intel HD graphics, and 22% lower TDP (thermal design power)

Another excellent feature of Skylake is its built-in Digital Signal Processor (DSP), which allows one to turn on and control a PC by voice. So, does that mean we will no longer need a power switch in PCs with this chip?

With all the exceptional features in Skylake, Intel has taken a great leap in processor technology ahead of its rivals, but it cannot be complacent. AMD is expected to launch a new range of seventh-generation A-series chips based on its Zen architecture in 2016, and expects this range to restore much of its lost glory. Intel cannot take AMD lightly and needs a roadmap to counter any surprises AMD may spring in 2016.