7 familiar Microsoft technologies you didn’t know Microsoft bought

Not many people are aware that some of Microsoft's familiar technologies were actually bought by the company. Check out seven familiar technologies that Microsoft acquired.

Every company's acquisition strategy varies. Microsoft performs small, strategic acquisitions of companies that provide what amounts to a piece of a bigger puzzle. When Microsoft makes these strategic buys, they pay off handsomely. While many may not realize it, some of Microsoft's most-used products were bought.


Microsoft is a US public multinational corporation headquartered in Redmond, Washington. The company develops, licenses, manufactures and supports a range of products and services, predominantly related to computing, through its different product divisions. Microsoft was established on April 4, 1975 to create and sell BASIC interpreters for the Altair 8800. It grew to dominate the home computer operating system market, and it also came to dominate the office suite market with Microsoft Office. Over the years, there have been numerous technologies that people did not realize Microsoft bought. Some of these familiar technologies include the following.

Microsoft Technologies


  1. MS DOS. When Intel introduced the 8086 processor, IBM planned on building a computer for it and contacted Gary Kildall, the original creator of the CP/M OS. Kildall's wife refused to sign an NDA with IBM to discuss a licensing deal, so IBM turned to Microsoft. In turn, Microsoft bought the rights to the Quick and Dirty Operating System, or QDOS, a very CP/M-like system, from Seattle Computer Products. Microsoft did not tell SCP about the IBM deal, and it managed to convince IBM to let it market MS-DOS separately from the personal computer.
  2. FRONT PAGE. Nowadays, most people throw around HTML tags as if it's nothing. However, back in the early days of the web, coding an HTML page was about as arcane to most people as C programming. That is why, to get a jump on things, Microsoft bought FrontPage and its developer, Vermeer Technologies, in 1996. Initially, it was bundled with Windows NT 4.0 Server, which came with a web server named Internet Information Services 2.0. Later, FrontPage was added to the Microsoft Office lineup with Office 97, and its server-side components were subsequently renamed FrontPage Server Extensions. Microsoft discontinued it in December 2006, but its replacement, Microsoft Expression, had been released a year earlier and was making headway as the preferred web page editor.
  3. POWERPOINT. Originally designed for the Macintosh under the name 'Presenter', the software was renamed PowerPoint because of a trademark issue. The developer, Robert Gaskins, managed to release three versions in short order before Microsoft called in August 1987 and bought it for $14 million. PowerPoint became Microsoft's Graphics Business Unit and eventually became an integral part of Microsoft Office.
  4. HOTMAIL. The purchase of Hotmail is another example of Microsoft jumpstarting its internet business. Stylized as HoTMaiL, the service was bought for $400 million in 1997. It was one of the first web-based mail services, together with Yahoo Mail, which was also an acquisition. Microsoft renamed the service MSN Hotmail. It was later surpassed by Gmail, and after falling into misuse and decay, Hotmail had become a home for spammers by 2011. Microsoft whipped the service into shape, but its reputation was already tarnished by that point. Additionally, Microsoft had a second mail service, Windows Live Mail, so it merged Live Mail and Hotmail into a single new brand, Outlook.com.
  5. VISIO. Visio showcases how life can go for an entrepreneur in the technology industry. In 1984, Jeremy Jaech and Dave Walter co-founded one of the first desktop publishing companies, Aldus, inspired by the Macintosh. Aldus's PageMaker used Adobe PostScript technology, and Adobe eventually decided it needed a DTP package of its own and purchased Aldus in 1994. Before that, in 1992, Jaech and Walter had begun a new company called Visio. Again, they concentrated on visual design on the screen, in this case flow diagrams, charting and layout designs. This time, Microsoft came calling and paid $1.5 billion in stock for Visio in 2000. Since then, Visio has been part of Office.
  6. INTERNET EXPLORER. Microsoft initially blew off the internet revolution in the early '90s before Bill Gates came to his senses. After falling way behind in the browser competition, the company jumpstarted its efforts by licensing the NCSA's Mosaic browser from Spyglass, a company spun out of the NCSA to sell Mosaic licenses. It turned out to be a tangled deal, which cost Microsoft $10 million and gave Spyglass audit access to the IE code, an arrangement that ended with IE7.
  7. GIANT ANTISPYWARE. Once upon a time, GIANT AntiSpyware competed with Webroot in the then-new spyware category: software which, unlike viruses, hid on the computer and watched for particular keystrokes, like the login and password for an online bank account. In 2004, Microsoft bought GIANT and renamed it Microsoft AntiSpyware, before renaming it Windows Defender a year later. Eventually, it was superseded by Microsoft Security Essentials, which broadened the level of coverage and protection beyond just spyware.

Microsoft indeed stepped up its shopping habits in 2014, and through the years since, it has shown no signs of slowing down.

Read more: http://www.infoworld.com/article/2606625/applications/116047-Microsoft-products-you-may-not-know-were-acquisitions.html

A blend of Information Technology and Indian talent to shape India's future

For many years now, the advantages of Information Technology have been undeniable. IT in India is an industry that consists of two main components: IT services and business process outsourcing (BPO). Narendra Modi, India's prime minister, started the 'Digital India' project to help secure IT a spot inside and outside the country. It has become necessary to defend the country's IT industry against all the other outsourcing destinations that have emerged recently. The government of India fully supports its IT industry, making India the outsourcing hub of the world.


Prime Minister Modi said that a blend of IT and Indian talent will help secure the future of the country. At an event to launch the integrated case management information system, Modi told those present that the formula is 'IT + IT = IT': Information Technology plus Indian Talent equals India Tomorrow. This, he said, would help India stay ahead of the others. Furthermore, the Prime Minister stated that India's future lies in the use of the latest technologies. With different types of information technology taking shape, it has become a priority for the country's IT industry to shape up and stay competitive.

The Prime Minister also used the government's favorite tagline, 'desh badal raha hai' or 'the country is changing', to drive home the point about changing mindsets when it comes to work culture. Modi said: 'Desh badal raha hai. Aaj chutti hai, hum log kaam kar rahe hain.' In English: the country is changing; today is a holiday, Buddha Purnima, but we are still working. The statement amused the audience. JS Khehar, India's Chief Justice and the senior judge of the apex court, and Union Law Minister Ravi Shankar Prasad shared the stage with Modi during the event. Modi released a song that goes 'My country is changing, it is moving forward' to mark the second anniversary of his administration.

India's IT industry was born in Mumbai in 1967 with the Tata Group in partnership with Burroughs. SEEPZ, the first software export zone, was established in Mumbai back in 1973, and over 80 percent of the country's software exports came from SEEPZ during the '80s. India's economy underwent major economic reforms in 1991, which led to a new era of international economic integration and globalization, as well as annual economic growth of more than six percent from 1993 to 2002. In part, that growth was driven by impressive internet adoption in the country.

The country remains the world's favorite back office, thanks to its 'first-mover advantage' and its deep skill base. It is the all-around standout, providing manpower for any kind of offshoring project, and it still maintains the lion's share of the IT market.

India has excellence in Information Technology, and the government fully supports its IT industry. The country is a popular destination for BPO because of the big annual output of qualified graduates from its elite educational institutions. Furthermore, the English language capabilities of the country's population make it the number one choice for outsourcing. India's IT services stalwarts are also moving up the value chain: companies like Wipro and Infosys are developing their R&D capabilities, expanding well beyond traditional vendor roles.

As the top outsourcing destination in the world, India stays unmatched in terms of its vast pool of talent and skilled human resources. It has a population of more than 1.2 billion, with about 3.1 million graduates added to the workforce every year. The country also holds the distinction of being the largest English-speaking country in the world, bigger than the United States and the United Kingdom combined. Beyond the huge numbers, the quality of the talent draws organizations that wish to outsource: Indian service providers are creative and innovative and can truly contribute to the business.

Perhaps the number one reason the country remains the top outsourcing destination in the world is the considerable cost savings that businesses and organizations, wherever they may be located, can achieve. This is mainly due to the huge gap between personnel expenses in India and those in developed countries. For instance, a good full-time developer in the United States can cost from $50 to $80 per hour, depending on experience and skills, while the hourly cost of a developer in India can be negotiated to as low as $15 an hour. That pricing flexibility gives organizations freedom and creativity in managing their budgets, and it also helps companies reap bigger profits.
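The cost gap can be made concrete with some back-of-the-envelope arithmetic. The hourly rates below come from the text; the full-time load of 40 hours a week over 52 weeks is an assumption for illustration only:

```python
# Rough yearly cost comparison using the hourly rates quoted above;
# the full-time load (40 hours/week, 52 weeks/year) is an assumption.
HOURS_PER_YEAR = 40 * 52  # 2,080 hours

def annual_cost(hourly_rate):
    return hourly_rate * HOURS_PER_YEAR

us_low, us_high = annual_cost(50), annual_cost(80)
india = annual_cost(15)

print(f"US developer:    ${us_low:,} to ${us_high:,} per year")
print(f"India developer: ${india:,} per year")
print(f"Savings:         ${us_low - india:,} to ${us_high - india:,} per year")
```

Even at the low end of the US range, the difference comes to more than $70,000 per developer per year, which is why the cost argument tends to dominate outsourcing decisions.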


Open source development tools and boards that are democratizing IoT design

Open source development boards and communities are democratizing IoT design. The IoT, or Internet of Things, is at the core of what the World Economic Forum has identified as the 4th Industrial Revolution: a technical, economic and cultural transformation that combines the digital, physical and biological worlds. IoT is driven by technologies such as big data, ubiquitous connectivity, analytics and the cloud.

Open source development tools


Democratization of computing may sound like a nerdy rallying cry, but it is a revolution that has been brewing for quite some time. By breaching the walls that have traditionally divided the software and hardware disciplines, the democratization movement has recently picked up speed with the advent of new technologies and open source design tools, forging fresh communities around one of the most talked-about technical trends of the moment: the IoT. The Internet of Things is poised to bring bigger opportunities to tech audiences, and its workflows will have to be streamlined, making better use of modularized components, to address the real technical challenges that IoT design entails, from software design and sensor networking to cloud on-boarding.

In the not-so-far future, the IoT's network of smart devices and sensors will connect virtually every aspect of the environment, in homes, industrial workflows, physical objects, communication systems, transportation, clothing and the human body, allowing them to connect and exchange data. Furthermore, IoT also has the potential to democratize software and hardware engineering itself through open source development software, bringing a new generation of innovators and developers into the technology industry. It is predicted that by the end of this year, fifty percent of IoT solutions will come from startups that are no more than three years old. Nonetheless, if the IoT is to propel the democratization of engineering, it has to become much simpler to design for. IoT solutions are notoriously challenging and complex to create, because building a complete end-to-end IoT platform requires knowledge of multiple technical domains, including skills for managing and integrating sensors, processors, power management, security, embedded software, data analytics and cloud platforms, among others. Thus, creating IoT platforms from scratch typically is not the best approach for most innovators and developers. This is where open source development boards can help.

Entrepreneurs, startups and makers can leverage open source, low-cost electronics platforms and SBCs from suppliers like Newark element14 to get powerful design tools to more designers faster, making IoT technologies more accessible and available to all types of innovators. The development platforms also help bridge the technical gap between coders and engineers, and supplier networks provide a community for technology support and advice. There are specialized IoT development kits that provide the software, hardware, firmware and integration tools to shorten design time. Most importantly, the IoT kits usually include cloud access solutions and sensor deployment components to ease the burden of integrating the technologies into functional designs. Development kits are rapidly becoming the equal of reference designs for IoT infrastructure, as suppliers evolve the out-of-the-box experience to include not only the base platform but the necessary cloud access and sensors too.

The democratization of IoT development is quickly becoming a reality. Open source development kits and other prototyping platforms provide the basics for leveling the Internet of Things supply chain, accelerating innovation and empowering anyone with ambition and great ideas to design and create successful solutions. The growth of shared open source expertise, development kits and small-batch manufacturing goes hand in hand with the Internet of Things, facilitating fast development and prototyping for a dynamic, huge and fast-growing market. These days, companies in the IoT environment, such as Google, Samsung, ARM and Huawei, embrace the open source model by exposing their projects to the developer community and inviting contributions toward creating a world that is technologically robust and reliable.

Some of the open source systems that are democratizing IoT include Brillo, Contiki and RIOT, to name a few. Brillo, from Google, is an Android-based operating system intended for embedded devices. It can run on low-end, constrained devices with at least 128MB of ROM and 32MB of RAM, and it supports intercommunication technologies like Wi-Fi, Thread and Bluetooth. Contiki was created in 2002 by Adam Dunkels and now has developers all over the world. The open source software is released under a BSD license; it has a built-in Internet Protocol suite, provides multitasking and can comfortably run on constrained devices with 30KB of ROM and 30KB of RAM. RIOT is an IoT operating system with real-time capabilities. It was first developed by a consortium of universities in France and Germany, including the French Institute for Research in Computer Science and Automation (INRIA), the Free University of Berlin and the Hamburg University of Applied Sciences. It is released under the GNU Lesser General Public License and can run on constrained devices with a minimum of 1.5KB of RAM and 5KB of ROM.
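The event-driven, cooperative scheduling style these kernels use is a big part of what lets them run in a few kilobytes of RAM: handlers run to completion instead of blocking, so tasks can share one stack. The sketch below shows the idea conceptually, in Python for readability; real Contiki or RIOT code is C, and the names here are illustrative, not their actual APIs:

```python
# Conceptual sketch of an event-driven kernel (Python for readability;
# real Contiki/RIOT kernels are written in C with different APIs).
from collections import deque

class EventKernel:
    def __init__(self):
        self.queue = deque()  # pending (handler, event) pairs
        self.log = []

    def post(self, handler, event):
        """Queue an event for a handler instead of calling it directly."""
        self.queue.append((handler, event))

    def run(self):
        # Cooperative scheduling: each handler runs to completion,
        # so no per-task stacks are needed.
        while self.queue:
            handler, event = self.queue.popleft()
            handler(self, event)

def sensor_process(kernel, event):
    kernel.log.append(f"sensor read: {event}")
    kernel.post(radio_process, event)  # hand the reading to the radio task

def radio_process(kernel, event):
    kernel.log.append(f"radio sent: {event}")

kernel = EventKernel()
kernel.post(sensor_process, 21.5)  # e.g. a temperature reading arrives
kernel.run()
print(kernel.log)
```

Because no handler ever blocks, the kernel needs only the event queue and a single call stack, which is the main trick behind the tiny memory footprints listed above.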

Open source is not only a business model or a software development model; it is also a huge opportunity for professional coders and hobby programmers alike to touch billions of lives and change them forever.

Watch out: multi-user support will be coming to Google Home anytime now

Google, originally known as BackRub, is a search engine whose development began in 1996, when Larry Page and Sergey Brin started it as a research project at Stanford University to search for files on the web. Later, Larry and Sergey decided the search engine's name had to change and settled on Google, inspired by the term 'googol'. The domain was registered on September 15, 1997, and the company was incorporated on September 4, 1998. What makes Google stand out from the competition, and helps it continue to grow as the number one search engine, is the PageRank technique it uses to sort results. It also integrates many of its services, like Google Local and Google Maps, to offer more relevant search results.
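The PageRank idea can be sketched in a few lines. This is a simplified textbook version, not Google's production implementation: each page's score is split across its outgoing links, and the scores are iterated until they settle, so heavily linked-to pages rank highest:

```python
# Simplified PageRank sketch (power iteration), not Google's actual code.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # dangling page: spread its score evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / len(pages)
            else:             # otherwise split the score across its links
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

# A page that many others link to ("c" here) ends up with the top score.
ranks = pagerank({"a": ["c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # c
```

The damping factor models a surfer occasionally jumping to a random page; without it, scores can get trapped in link cycles.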

Google Home is Google's answer to Amazon's Alexa. Furthermore, going by current news, it is likely to get multi-user support anytime now, if the messaging in the Google Home companion app is any indication.


With Google Home, multiple users can now be supported, so everyone in the home can get a personalized experience from the Assistant on Google Home. Google first introduced the Google Home device at its yearly Google I/O developer conference last summer. At the time, it hinted at plans to eventually add multi-user support, enabling numerous people to access their individual calendars, get information on their commutes and more via the device. The company is expected to give consumers and developers an update on its smart speaker plans during this year's Google I/O, scheduled for the middle of May.

Furthermore, an update to Google Assistant today also means that shopping lists will no longer be saved in Google Keep. Instead, they will port over to the Google Express or Google Home applications. The move aims to consolidate shopping-related voice commands in one place, and it sets Express up to offer order and delivery functionality through the Home more widely. Google added the functionality earlier this year with over 50 Google Express retail partners, including Staples, Costco, Walgreens and, of course, the Google Store. Google Express typically requires a membership fee or a per-delivery charge; however, it is waiving the costs until the 30th of this month for purchases made via the Home. Those who have a running list in Google Keep will see its contents ported over to the Express or Home application starting now.

Google Home

Multi-user support is a feature that users have long desired. Until now, Google Home could be linked to just one specific user's data, music and other accounts. Recently, however, a card was spotted in the Home application which suggests the connected speaker will support multiple users soon. The message, which can be seen in the 'Discover' tab of the application, tells users that they and others in the home can now get a personalized experience from the Assistant on Google Home. The feature is the most recent sign from the search giant that multi-user support is likely on the way and coming soon. A previous teardown of the application's code revealed that the feature was in the works, and executives of the company have confirmed that it will be coming to Google Home soon.

The support website for Google Home states that the feature has not officially launched yet; the announcement simply lets users know about the next update the company will release, which will allow multiple users. Moreover, the company states that users will get notifications and the latest app update as soon as it is released. At present, the Google Account settings page states that the device supports only one account: 'the default Google Account is the account you used to set up Google Home. Currently, Home supports only one Google Account per Google Home device'. The word 'currently' indicates that this could soon change.

For home assistants such as Google Home and Amazon Alexa, multi-user functionality is very relevant, as these services can tap into all types of personal accounts, including email, music and more. Imagine each person in the home having a separate Google Home; that does not make sense at all. Devices such as the Google Home are made to be used by numerous people rather than one single person. Nonetheless, things can get a bit complicated, since users have different needs and preferences, and Google believes that adding multi-user support can address these concerns. By adding the feature, Google wants to build a service that is more personalized and caters to more of people's preferences and needs. Hopefully, the new feature also means that the Assistant on Google Home is catching up to the one on mobile and will start to offer options that were previously inaccessible because of privacy or security concerns, like sending messages, making calendar appointments and setting personal reminders.

Microsoft's Visual Studio 2017 refines the fundamentals with new features

Microsoft Visual Studio 2017 refines the fundamentals with new features. It is now a full 20 years since Microsoft first offered the Visual Studio software development product to the market. Even so, with the new features in the Visual Studio 2017 release, the focus is very much on how the company can provide productivity-focused refinements for the programmers who use it to build the applications people use every day. The new release invests in several key areas, refining the fundamentals.

Now, with all the new features, one can build smarter applications faster. New features like live dependency validation help drive DevOps earlier in the development phase. Additionally, improvements to popular features such as IntelliSense, code navigation, code fixes and refactoring save both effort and time, whatever the platform or language.

Microsoft Visual Studio 2017

What fundamental refinements could Microsoft make in the engine room? The Visual Studio 2017 release comes with enhancements to code navigation, which help developers find their way around their own code structures and, hopefully, create cleaner programs that work better. Check out several of the new features, such as the following.

  1. Find and fix bugs sooner. The whole testing and debugging experience has been improved to help find and address issues earlier. Features such as Exception Helpers, Live Unit Testing and Run to Click tighten the DevOps loop by minimizing regression risks and exposing the root cause of new bugs right away.
  2. Cloud integration. It's easier than ever to create and deploy services and apps to Microsoft Azure or the cloud directly from the IDE. Built-in tools provide comprehensive integration with Azure services, .NET Core apps, Docker containers and much more. The experience is so seamless that one feels like working from inside the Azure datacenter. There are also new features that enable development teams to easily adopt modern DevOps practices and coordinate to react to market changes faster and more continuously.
  3. Efficient collaboration. Manage team projects hosted by any provider, including Visual Studio Team Services, GitHub or Team Foundation Server. Furthermore, one can also use the new Open Any Folder feature to open and work with virtually any code file immediately, with no need for a formal solution or project around it.
  4. Deliver quality mobile applications. With Xamarin's advanced debugging system, unit test generation features and profiling tools, it's easier and faster than ever to create, connect and tune native mobile applications for iOS, Android and Windows. Moreover, one can choose to develop mobile applications with Apache Cordova or create C++ cross-platform libraries.
  5. Level up the language. Visual Studio continues to invest in support for current programming language features. Whether working with Visual Basic, C#, TypeScript, C++, F# or even third-party languages such as JavaScript, there is first-class feature support across the whole development experience. Furthermore, the company states that it has focused on sharpening code fixes, debugging and more, all features particularly designed to help an app work properly and not get overloaded, have its security compromised or simply crash.
  6. Polished refactoring capabilities. Simply put, refactoring means changing code in a non-functional manner, so it can be read more easily and laid down with less complexity for future generations.
  7. IntelliSense technology. Microsoft makes much of using its own IntelliSense technology in the new 2017 version. IntelliSense works in some ways much like the text auto-complete function users experience in a word processor or a smartphone application while typing. It is context-aware code completion, so it can speed up some coding activity as well as minimize typos and other common errors.
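The refactoring idea in item 6 can be shown with a small, hypothetical example: the behavior stays exactly the same, while the code gains named steps and loses duplication (the example is illustrative, in Python rather than a Visual Studio language):

```python
# Hypothetical before/after refactoring example: both functions compute the
# same order total, but the refactored version names its steps and is easier
# to read. Refactoring changes structure, never behavior.

def total_before(items):
    t = 0
    for i in items:
        if i[1] > 0:
            t = t + i[0] * i[1]
    return t

def line_total(price, quantity):
    return price * quantity

def total_after(items):
    """Sum the line totals of items with a positive quantity."""
    return sum(line_total(price, qty) for price, qty in items if qty > 0)

order = [(9.99, 2), (4.50, 0), (1.25, 4)]
assert total_before(order) == total_after(order)  # behavior is unchanged
print(f"{total_after(order):.2f}")  # 24.98
```

Tool-assisted refactorings such as 'extract method' or 'rename' in an IDE automate exactly this kind of behavior-preserving transformation.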

The DevOps notion has already been widely discussed. It is the coming together of the development and operations teams, in name and in function. DevOps aims to bring the two together with a wholeness rarely found outside spiritual self-help workgroup meetings, spanning the entire software application lifecycle. Ultimately, the result of the new Visual Studio 2017 is more automation intelligence. Visual Studio for Mac Preview 4 also forms part of this month's news in this scenario. Again, Microsoft aims to spread itself cross-platform to attract programmers of all sizes, shapes and preferences into the fold, to use its wider tool sets as well as its cloud services.

Most of what Microsoft has done in refining the fundamentals has focused on building what people now call 'automation intelligence' in software: the ability to make things happen in an automated fashion, based on the defined needs and procedures that blocks of code, systems, apps and the other programmatic elements connecting to them require. In summary, Visual Studio 2017 is quite a refined refinement, fundamentally.

.NET Core, the future of Microsoft’s development platform

The future looks very bright for .NET developers with the release of .NET Core. Being open source and cross-platform, .NET Core makes it possible to run .NET on any platform.

A lot is happening in the Microsoft ecosystem, and the future looks very exciting for .NET developers because of the introduction of .NET Core and ASP.NET Core, which make it possible to run the framework on any platform. Times are also changing within Microsoft: the company built the .NET Foundation and open-sourced numerous projects.

.NET Core, the future of Microsoft’s development platform


.NET Core is the new cross-platform implementation of Microsoft's .NET. While .NET is not disappearing, .NET Core is set to be the future focus of Microsoft's development platform. Born of the desire and need for a modern runtime that is modular and whose libraries and features can be cherry-picked, the .NET Core execution engine is also open source on GitHub.


There are three groups of developers who should evaluate how they work with .NET: existing developers happy with their apps as-is, existing developers who want to move their apps cross-platform, and developers who are new to .NET. For existing .NET developers who have been working with the framework for the past fifteen years, a switch to the .NET Core platform means giving up a huge number of familiar APIs as well as entire frameworks like WCF and WPF.

WCF is the best technology for building and consuming SOAP services, so it's a shame that it can currently be used only on Windows. WPF is used for building Windows desktop apps, so its absence is no surprise. Microsoft has hinted that WCF could be included in future .NET Core versions. Existing developers who have to keep using technologies like WPF should be aware that they will need to keep Windows as the host operating system. For those who are new to .NET, the best thing to do is learn .NET Core first, evaluate it and determine whether it provides the platform one needs. If it does not, then it is best to check out the .NET Framework.

Even though .NET Core has not fully matured yet, it holds plenty of exciting features. First, it enables a developer to run apps on Mac and Linux. Second, it is modular, meaning it is stripped of most libraries and frameworks, which can be added back as needed.


.NET Core 1.0 does not support building desktop apps for Windows or other platforms like Linux. For cross-platform development, it only supports web apps and services through ASP.NET Core, as well as command-line apps. Moreover, .NET Core also supports Universal Windows Platform applications, which are cross-device but limited to Microsoft Windows 10 platforms, including HoloLens and Xbox One. For cross-platform mobile development, consider using Xamarin, which is based on the Mono project rather than on .NET Core. It is likely that Microsoft will slowly merge Xamarin and .NET Core, so that in a few years there would be a single .NET Core 2.0 supporting cross-platform mobile and web development.


.NET is not going away, but it is now apparent that the company's focus is on .NET Core as the future of its development platform. .NET goes back a long way; it was first announced in Europe in 2000. At the time, it was seen as a defensive move against Sun's Java, which threatened Windows with its cross-platform capability and a programming language that was more productive and easier than C or C++.


The following are the key foundational improvements with .NET Core.

  1. Build and run cross-platform ASP.NET applications on Mac, Windows and Linux.
  2. New tooling that simplifies modern web development.
  3. A single aligned web stack for Web APIs and Web UI.
  4. Built on .NET Core, which supports genuine side-by-side application versioning.
  5. Built-in support for dependency injection.
  6. Cloud-ready, environment-based configuration.
  7. The ability to host on IIS or self-host in one's own process.
  8. Tag Helpers that make Razor markup feel more natural with HTML.

The mature and enhanced functionalities would feel very familiar and help with modern website development.
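Built-in dependency injection (item 5 above) follows a pattern that is easy to sketch: services are registered once with a container, and consumers receive their dependencies instead of constructing them. The snippet below illustrates the general idea in Python; it is not ASP.NET Core's actual API, where services are registered on an IServiceCollection in C#:

```python
# Conceptual dependency-injection sketch (Python for illustration only;
# ASP.NET Core's real container is configured in C#).

class Container:
    def __init__(self):
        self._factories = {}

    def register(self, name, factory):
        """Register a factory that builds the named service on demand."""
        self._factories[name] = factory

    def resolve(self, name):
        return self._factories[name](self)

class ConsoleLogger:
    def log(self, msg):
        return f"LOG: {msg}"

class GreetingService:
    def __init__(self, logger):
        self.logger = logger  # the dependency is injected, not created here

    def greet(self, name):
        return self.logger.log(f"Hello, {name}")

container = Container()
container.register("logger", lambda c: ConsoleLogger())
container.register("greeter", lambda c: GreetingService(c.resolve("logger")))

service = container.resolve("greeter")
print(service.greet("world"))  # LOG: Hello, world
```

Because GreetingService never constructs its own logger, a test can register a fake logger in its place, which is the main practical benefit of the pattern.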

The world is moving towards a future that is more platform-independent, and it has never been more exciting to be a .NET developer than it is today. These days, .NET apps can be developed directly on OS X, with no need to open up Parallels. Moving forward, .NET Core, the .NET Framework and Xamarin are all integral products that will continue to evolve for Windows, the cross-platform cloud and cross-platform mobile.

.NET and ASP.NET will continue to be relevant for existing workloads, and .NET Core applications will share new common capabilities in the future.

Mobile will play a key role in IoT rollout: Forrester Research

Forrester Research revealed that mobile will not fade in the years to come; instead, it will play a key role in the IoT rollout, activating adjacent technologies to enable new brand experiences.

According to Forrester Research, the end of the smartphone era is nowhere close, although we might see greater integration between mobile and other platforms in the future. A new Forrester Research report indicates that the IoT, or Internet of Things, will bring new business and consumer opportunities in the years to come as billions of connected devices are added. Yet mobile will remain the dominant place where people connect things together.


Mobile will play a key role in the IoT rollout by activating adjacent technologies that enable new brand experiences. IoT enthusiasm is at near-peak levels, but organizations appear to be very careful with new technology, which could mean a long wait until the consumer and industrial markets are flooded with users and devices. IoT is all about devices, how people interact with them and the data they create. One major piece of the IoT that is often overlooked is the mobile device. With the release of Apple's iOS 8, there is no better time to point out that mobile devices are the gateways to the Internet of Things.

Mobile devices stay with users as they go from home to the car, to the office and beyond. The mobile device can bridge the gap between disconnected parts of people's lives and locations, and help build a bigger context for their environments. Embedded sensors are becoming more pervasive as the technology scales down in size, making it easier to fit them inside everyday objects; many small radios and sensors no longer even need their own power source. As these sensors find their way into more devices and begin communicating with each other, people can derive a bigger context from their surroundings to help with daily life.

Mobiles are the clear choice for interacting with devices and sensors. Smartphones have become a huge part of daily life: people communicate with them, navigate with them, do work on them and pass the time playing games on them. By using mobile devices as the gateway to the IoT, we can add more context to all of these activities. Furthermore, these devices can add a much deeper, richer context to simple games by interacting with the surrounding environment and acquiring information from sensors to boost and personalize the player experience.

At present, there is no true standard in the world of the IoT. Some groups are working to standardize communication or hardware, such as the IEEE Standards Association and the International Telecommunication Union (ITU). Nonetheless, other groups are creating their own standards and protocols because they want to lock down the technology. Although this is a setback, it highlights why the mobile device is the ideal technology to overcome the roadblock: it already carries various radios, and in the application-driven world of smartphones it is not especially difficult to integrate a new protocol. Thus, while devices and sensors are moving along nicely, the lack of standards and protocols could well limit the real potential of the IoT.

5G, being rolled out by Verizon and AT&T, is the fifth-generation mobile network, and it could be far more transformational than its predecessors. It accelerates the adoption of the IoT. According to one estimate, the number of connected things could more than double to 50 billion worldwide by the year 2020, and could even reach 500 billion ten years after that. The 5G rollout represents the largest expansion of the web to date and could generate billions of dollars of business for brands such as Nokia, Cisco, Ericsson, Qualcomm and Intel, all of which are vying to build the nuts and bolts of the new networks.

5G goes beyond tablets and smartphones; it could provide a great opportunity to transform the world. There is no question that it will happen; it is just not clear who will benefit most, since much of the technology has yet to be tested in the real world. People expect it to be more responsive. For many years, the industry focused largely on the amount of data it could push through the system in a given time, a concept known as throughput. The new focus is latency, or how fast a network responds to a request. According to the Forrester research, latency matters. The next step will be to get billions of things talking directly to each other instead of going through centrally controlled networks, as most connected devices do today. Once this happens, whole new vistas will open up.
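The throughput-versus-latency distinction can be made concrete with a little arithmetic. The following is a toy sketch with invented numbers (not measurements of any real network): a link can have an enormous pipe and still feel slow when an application makes many small, sequential requests.

```python
# Throughput vs. latency, illustrated with toy numbers.

def throughput_mbps(megabits: float, seconds: float) -> float:
    """Data moved per unit time: how much the pipe carries."""
    return megabits / seconds

def round_trip_ms(one_way_ms: float) -> float:
    """Latency: how long the network takes to answer one request."""
    return 2 * one_way_ms

# A link can have high throughput and still feel sluggish:
bulk = throughput_mbps(megabits=1000, seconds=1)   # a 1 Gbps pipe...
rtt = round_trip_ms(one_way_ms=50)                 # ...with 100 ms per round trip

print(f"throughput: {bulk} Mbps, round trip: {rtt} ms")

# For chatty IoT exchanges (many small sequential requests), total time
# is dominated by round trips, not by the size of the pipe:
requests = 20
print(f"{requests} sequential requests spend ~{requests * rtt} ms just waiting")
```

This is why the industry's attention has shifted: adding more throughput does nothing for the waiting time in the second print statement, while cutting latency shrinks it directly.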