Monday 2 March 2015

Artemis Testing Super Fast Alternative to LTE

Startup Artemis Networks has a technology it believes can take wireless networks to the next level. And the company may soon have the opportunity to prove it: Dish Network is making possible the world's first deployment of pCell wireless technology.
Through its wholly-owned subsidiary American H Block Wireless, Dish is planning to hand over some H Block mobile spectrum in San Francisco to Artemis for up to two years for a field test. The only hurdle is FCC approval -- Artemis has to get the commission's OK to move forward with the test. A new approach to wireless, pCell has the potential to be revolutionary.
Indoor testing has already demonstrated it can deliver full-speed mobile data to every mobile device at the same time -- no matter how many users are sharing the same spectrum. The end result: far greater capacity than conventional LTE. The most advanced conventional LTE networks average 1.7 bps/Hz in spectral efficiency; pCell, by contrast, posts an average of 58 bps/Hz -- roughly 35 times the spectral efficiency of conventional LTE.
Will it Really Work?
"The Artemis I Hub enables partners to test pCell in indoor and venue scenarios using off-the-shelf LTE devices, such as iPhone 6/6 Plus, iPad Air 2 and Android devices,” said Steve Perlman, Artemis founder and CEO.
Here’s how it works: Instead of avoiding interference like conventional wireless technologies, pCell technology actually exploits interference. The technology combines interfering radio waves to create an unshared personal cell, or pCell, for each LTE device. This sets the stage to provide the full wireless capacity to each user at once, even at extremely high user density, according to the company.
We asked Jeff Kagan, an independent technology analyst, for his take on pCell. He told us it’s an interesting idea. Of course, we still don't yet know whether it will work in real world operations, he added.
“If it does work as advertised, it could alleviate some of the pressures on traditional networks like LTE in areas like stadiums where there are large groups in a small area. Of course this is not automatic," he said.
Stretching the Limits
Indeed, customers still have to insert Artemis SIM cards into LTE devices to take advantage of the service -- unless they have devices that carry the new universal SIM. In that case, consumers would choose Artemis as their LTE service on the screens of their devices. The devices would then connect to Artemis pCell service as they would to any LTE service. However, most consumers don’t have devices that carry the universal SIM.
“This is an idea that is needed as we stretch the limits of the way we currently provide wireless data,” Kagan said. “This also inserts another company into the mix -- a company that will charge for its services. We really have more questions than answers today, but it's an interesting new approach.”
Beyond the Dish news, Artemis is also rolling out the Artemis I Hub for venue and indoor trials. The Artemis I Hub provides pCell service through 32 distributed antennas and promises to deliver up to 1.5 Gbps in shared spectrum to off-the-shelf LTE devices, with frequency agility from 600 MHz to 6 GHz. That would enable pCell operation in any mobile band.

Google Steps Up Chrome Warnings for Safer Surfing

Tech giant Google wants to save Internet users from themselves. The company's Chrome Web browser will now warn users before they visit sites that might encourage them to download programs or malware that could cripple their computers or otherwise interfere with their Web-browsing experience.
When users attempt to visit one of the questionable sites, they will see this warning in red letters: "The site ahead contains harmful programs."
The warning, part of what Google is terming SafeBrowsing, informs users that attackers may attempt to trick them into installing programs that harm their browsing experiences by changing their homepages or showing extra ads on the sites they visit, for example.
Two Categories
Google said the unsafe sites fall into two categories. One group consists of malware sites that contain code to install malicious software onto users’ computers. Hackers can use this malicious software to capture and transmit users' private or sensitive information. The other category consists of phishing sites that pretend to be legitimate while trying to trick users into typing in their usernames and passwords or sharing other private information.
The new precautions also extend to Google search and ads. Search now incorporates signals that identify deceptive sites, and Google recently began disabling ads that lead to sites with unwanted software.
"We're constantly working to keep people safe across the Web," Google Software Engineer Lucas Ballard wrote in a blog post Monday. "SafeBrowsing helps keep you safe online and includes protection against unwanted software that makes undesirable changes to your computer or interferes with your online experience."
Google said that about a billion people use SafeBrowsing. That means the company has a lot to gain by making the browsing experience as safe as possible since the Google search engine is the company’s primary generator of income.
Site Owners Beware
Site owners are also being targeted as part of the new initiative. They can register with Google Webmaster Tools to be notified when Google finds something on their sites that might lead people to download unwanted software. If that happens, Google said it will offer up tips to help them resolve the problems.
As part of that initiative, Google said it measures how quickly Webmasters clean up their sites after receiving notifications that their sites have been compromised. Even after a site has been cleaned, it can become reinfected if an underlying vulnerability remains, according to Google, which tracks the reinfection rate for those sites.
Google has had SafeBrowsing malware warnings in place for three years, but it was only last November that it added automatic malware blocking. At that time, Google noted that if users see malicious file warnings on Web sites going forward, "you can click 'Dismiss' knowing that Chrome is working to keep you safe."
The new protections emerged in the wake of last week's discovery that new Lenovo PCs shipped between September and December came pre-installed with adware known as Superfish, which uses a man-in-the-middle attack to insert ads into Web browsers.

Can Windows 10 Win Back Users?

The next generation of Windows is aiming to fix everything that was wrong with the last generation. Can Microsoft reverse its fortunes with Windows 10? Microsoft's stated goal with Windows 10 is ambitious: to inspire new scenarios across a broad range of devices, from big screens to small screens to no screens at all.
Terry Myerson, executive vice president of Microsoft’s Operating System Group, called Windows 10 the first step to an era of more personal computing. “This vision framed our work on Windows 10, where we are moving Windows from its heritage of enabling a single device -- the PC -- to a world that is more mobile, natural and grounded in trust,” Myerson said. “We believe your experiences should be mobile -- not just your devices. Technology should be out of the way and your apps, services and content should move with you across devices, seamlessly and easily.”
Windows’ Main Competition
Of course, having that vision is one thing. Delivering on it is another. We caught up with Rob Enderle, principal analyst at the Enderle Group, to get his thoughts on whether or not Windows 10 can win back users. First, he told us, Windows is still dominant against other current PC operating systems.
“Windows 10’s competition now is mostly Windows XP, iOS, and Android and often more about form factor than OS features,” Enderle said. “Windows 10 needs an easier migration path to help with XP users as that platform is just too far back to move easily and it hasn’t reached critical mass on tablets or smartphones.”
What’s more, Windows 10 also addresses the negative issues surrounding Windows 8, such as the missing start button, Enderle said. He’s betting the new operating system will be a far stronger alternative to the last version.
Not Fully Cooked?
Although Enderle doesn’t think Windows 10 is as strong as Microsoft could have made it, he said to truly take the market back from iOS and Android, Redmond needs the kind of exclusive OEM support it once had. But that appears to be outside of Microsoft’s reach and the capability of any product.
“Microsoft does appear to be fixing their relationship with Intel and OEMs actually prefer them over Google but the market moves where the user is,” Enderle said. “To capture the user they’ll need a magical product, hardware and software, much like the iPod became and the iPhone and iPad started out being.”
As Enderle sees it, this is a combination of software, hardware, and services that creates a unique product -- one that a critical mass of consumers can’t refuse. Windows 10 will be adequate to this task, but the other two parts -- hardware and services -- of this effort aren’t fully cooked yet, he said.

Algorithm Teaches Itself To Be a Better Gamer than You

Playing Breakout on an old Atari 2600 might not seem like cutting-edge computing, but it is when a computer algorithm learns on its own how to play that and other games as well as humans. In a paper published Thursday in the journal Nature, researchers from Google-owned DeepMind describe how their "deep Q-network," or DQN, did better than any previous machine-learning algorithms in mastering 43 of 49 classic Atari video games.
Starting with just the pixels on the game screen, a set of available actions and a reward system as an incentive for earning higher game scores, DQN was able to figure out such games as Breakout, Enduro racing, Pong, Space Invaders, River Raid and Q*bert. In half of the games, the algorithm "learned" how to play at "more than 75 percent of the level of a professional human player."
DeepMind, founded in 2011 and based in London, was acquired by Google in early 2014 (reports put the sales price at between $400 million and $650 million). The company researches machine learning and artificial intelligence, something with which Google has long been interested.
An Eye on Smarter Google Apps
Describing the new game-learning research Wednesday in a post on Google's Research Blog, DeepMind's Dharshan Kumaran and Demis Hassabis said DQN could help lead to smarter computing with practical, daily applications for people.
"This work offers the first demonstration of a general purpose learning agent that can be trained end-to-end to handle a wide variety of challenging tasks, taking in only raw pixels as inputs and transforming these into actions that can be executed in real-time," Kumaran and Hassabis said. "This kind of technology should help us build more useful products -- imagine if you could ask the Google app to complete any kind of complex task ('Okay, Google, plan me a great backpacking trip through Europe!')."
We caught up with Hassabis, who is vice president for engineering at DeepMind, to elaborate on future uses.
"From a more concrete applications point of view, our team is generally interested in things like Search and other core Google efforts -- baking better 'smarts' into services," Hassabis told us. "Ultimately, we'd like to help tackle bigger problems, too, like helping researchers make sense of the incredibly complex systems in climate science, medicine, genomics, etc."
Despite such potentially useful applications, the rapid advances in machine learning in recent years have led even a few of science's and technology's top minds -- including Stephen Hawking, Bill Gates and Elon Musk -- to describe artificial intelligence as a possible threat to humanity. DeepMind has also given the implications of its research some thought: around the time of Google's acquisition, members of the DeepMind team reportedly pushed for Google to establish an AI ethics board.
AI Pinball Wizard
DQN, Kumaran and Hassabis wrote, achieved its latest successes through the combination of artificial neural networks -- called deep neural networks -- and reinforcement learning, a framework that gave the algorithm the goal of maximizing future rewards by earning higher scores. To enable the algorithm to "learn" video-game-playing skills effectively, DeepMind also had to find a way to emulate another human condition: sleep.
During the learning phase, Kumaran and Hassabis said DQN was "trained on samples drawn from a pool of stored episodes," a mechanism called "experience replay." That process is similar to how the human hippocampus draws on declarative and episodic memories for dreams during sleep.
In fact, without the ability to "sleep" or "dream," DQN could not improve its gaming skills nearly as well.
"The incorporation of experience replay was critical to the success of DQN: disabling this function caused a severe deterioration in performance," Kumaran and Hassabis said.
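The replay mechanism described above can be sketched in a few lines. This is an illustrative toy, not DeepMind's actual code; the class, parameter names and sizes are invented for the example:

```python
import random
from collections import deque

class ReplayBuffer:
    """Toy version of DQN-style experience replay: store transitions,
    then train on random minibatches instead of consecutive frames."""
    def __init__(self, capacity=1000):
        # Fixed-size buffer: the oldest episodes fall off the end
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random draws break the temporal correlation between
        # successive game frames, which stabilises learning
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer(capacity=100)
for t in range(50):
    buf.add(state=t, action=t % 4, reward=1.0, next_state=t + 1, done=False)
minibatch = buf.sample(8)  # eight randomly chosen stored transitions
```

Disabling the random sampling, and instead always training on the most recent frames, is roughly what the quoted ablation did, and it is what caused the severe deterioration in performance.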
Among the games DQN did best at -- "human-level or above" -- were video pinball, boxing, Breakout, Star Gunner, Robotanks, Atlantis, Crazy Climber and Gopher. Games where its brand of machine learning didn't work so well, on the other hand, included Montezuma's Revenge, Private Eye, Gravitar, Frostbite, Ms. Pac-Man and bowling.

Lizard Squad Hacks Ailing Lenovo's Web Site

Lenovo’s Web site was hacked on Wednesday, giving the PC giant’s security team another black eye before it has even healed from the Superfish fiasco. The Lizard Squad claimed responsibility for the attacks via its Twitter account.
The hackers posted an e-mail exchange between Lenovo employees discussing Superfish, according to a Reuters report. The group then followed up with another threat on Twitter: “We’ll comb the Lenovo dump for more interesting things later.”
Beyond the e-mail exchanges, the Lizard Squad also hijacked Lenovo’s content and replaced it with a slideshow of young people peering into webcams and the song “Breaking Free” from the movie “High School Musical” playing in the background, The Verge reported.
Lenovo Regrets the ‘Inconvenience’
Lenovo, the world’s largest PC maker, has been criticized for shipping laptops pre-installed with virus-like software that puts customers in the line of hacker fire. Since June, Lenovo customers have been reporting a program called Superfish, software that automatically displays advertisements in the name of helping consumers find products online.
The problem is more serious than first thought. Last Friday, Facebook's Threat Infrastructure team issued an analysis of the adware, which concluded that “the new root CA (certificate authority) undermines the security of Web browsers and operating systems, putting people at risk."
After that, security researcher Filippo Valsorda called Superfish adware “catastrophic," saying that's “the only way all this mess could have been worse” because the Superfish proxy, which uses a Komodia content inspection engine, can be made to allow self-signed certificates without warnings. That opens the door to man-in-the middle attacks.
"We regret any inconvenience that our users may have if they are not able to access parts of our site at this time," the company said in a published statement. "We are actively reviewing our network security and will take appropriate steps to bolster our site and to protect the integrity of our users' information."
Blind to Risks
We caught up with Ken Westin, a security analyst at advanced threat protection firm Tripwire, to get his thoughts on the attack. He told us the lesson of the Superfish debacle is this: something that seemed like a good idea at the time to one group can have devastating consequences for a company as a whole.
“The deployment of Superfish compromised Lenovo customers’ privacy and security, and now hacking groups have essentially declared it open season against Lenovo. This whole event demonstrates what happens when businesses fail to take security and privacy into consideration, especially when adding new features that can invade customer privacy and weaken system security,” Westin said.
“Unfortunately, those responsible for security and privacy are often not part of the decision-making process, or are even aware these tools are deployed, so organizations may leave themselves blind to these risks," he added.

Facebook's 2014 Bug Bounty Program Awarded $1.3M


Facebook paid $1.3 million to 321 hackers worldwide last year who helped spot security flaws in the social network's software.
"Every year we are surprised by what we learn from the security community, and 2014 was no exception," Collin Greene, Facebook's security engineer, wrote in a blog post Wednesday morning.
Started in 2011, Facebook's "bug bounty" program awards money to people who report security gaps to the company.
There were 17,011 reports submitted to Facebook's bug bounty program in 2014, an increase of 16 percent over 2013. More severe security gaps were also reported to the social network last year, according to the blog post.
That includes flaws that would have allowed hackers to upload content to Facebook's and Instagram's servers, view users' private messages and post on their timelines.
Researchers in India reported the highest number of bugs, followed by Egypt, the United States, the United Kingdom and the Philippines.
The average payout in the United States was $2,470, and 61 bugs were reported. Worldwide, the average payout was $1,788.
The minimum award from Facebook for spotting a security bug is $500 and there is no limit on how high an award can go.
The largest bounty in 2014 was $30,000, which was paid to someone in Lithuania, according to Facebook. Since Facebook started its bug bounty program in 2011, the social network has paid out more than $3 million.
The program continues to grow. There have already been more than 100 reports of security flaws submitted to the social network this year.
Other companies, such as Google and Yahoo, also have bug bounty programs. But a 2014 report by the RAND Corporation noted that the black market for consumer data is growing and can be more profitable than the illegal drug trade.

Life Possible on Saturn’s Moon Titan


A representation of a 9-nanometer azotosome, about the size of a virus, with a piece of the membrane cut away to show the hollow interior.
In a new study, chemical engineers and astronomers from Cornell University reveal that Titan could harbor methane-based, oxygen-free cells that metabolize, reproduce and do everything life on Earth does.
Liquid water is a requirement for life on Earth. But in other, much colder worlds, life might exist beyond the bounds of water-based chemistry.
Taking a simultaneously imaginative and rigidly scientific view, Cornell chemical engineers and astronomers offer a template for life that could thrive in a harsh, cold world – specifically Titan, the giant moon of Saturn. A planetary body awash with seas not of water, but of liquid methane, Titan could harbor methane-based, oxygen-free cells that metabolize, reproduce and do everything life on Earth does.
Their theorized cell membrane, composed of small organic nitrogen compounds and capable of functioning in liquid methane at temperatures of 292 degrees below zero Fahrenheit, was published in Science Advances on February 27. The work is led by chemical molecular dynamics expert Paulette Clancy, the Samuel W. and Diane M. Bodman Professor of Chemical and Biomolecular Engineering, with first author James Stevenson, a graduate student in chemical engineering. The paper’s co-author is Jonathan Lunine, the David C. Duncan Professor in the Physical Sciences in the College of Arts and Sciences’ Department of Astronomy.
Lunine is an expert on Saturn’s moons and an interdisciplinary scientist on the Cassini-Huygens mission that discovered methane-ethane seas on Titan. Intrigued by the possibilities of methane-based life on Titan, and armed with a grant from the Templeton Foundation to study non-aqueous life, Lunine sought assistance about a year ago from Cornell faculty with expertise in chemical modeling. Clancy, who had never met Lunine, offered to help.
“We’re not biologists, and we’re not astronomers, but we had the right tools,” Clancy said. “Perhaps it helped, because we didn’t come in with any preconceptions about what should be in a membrane and what shouldn’t. We just worked with the compounds that we knew were there and asked, ‘If this was your palette, what can you make out of that?’”
On Earth, life is based on the phospholipid bilayer membrane, the strong, permeable, water-based vesicle that houses the organic matter of every cell. A vesicle made from such a membrane is called a liposome. Thus, many astronomers seek extraterrestrial life in what’s called the circumstellar habitable zone, the narrow band around a star in which liquid water can exist. But what if cells weren’t based on water, but on methane, which has a much lower freezing point?
The engineers named their theorized cell membrane an “azotosome,” “azote” being the French word for nitrogen. “Liposome” comes from the Greek “lipos” and “soma” to mean “lipid body;” by analogy, “azotosome” means “nitrogen body.”
The azotosome is made from nitrogen, carbon and hydrogen molecules known to exist in the cryogenic seas of Titan, but shows the same stability and flexibility that Earth’s analogous liposome does. This came as a surprise to chemists like Clancy and Stevenson, who had never thought about the mechanics of cell stability before; they usually study semiconductors, not cells.
The engineers employed a molecular dynamics method that screened for candidate compounds from methane for self-assembly into membrane-like structures. The most promising compound they found is an acrylonitrile azotosome, which showed good stability, a strong barrier to decomposition, and a flexibility similar to that of phospholipid membranes on Earth. Acrylonitrile – a colorless, poisonous, liquid organic compound used in the manufacture of acrylic fibers, resins and thermoplastics – is present in Titan’s atmosphere.
Excited by the initial proof of concept, Clancy said the next step is to try and demonstrate how these cells would behave in the methane environment – what might be the analogue to reproduction and metabolism in oxygen-free, methane-based cells.
Lunine looks forward to the long-term prospect of testing these ideas on Titan itself, as he put it, by “someday sending a probe to float on the seas of this amazing moon and directly sampling the organics.”
Stevenson said he was in part inspired by science fiction writer Isaac Asimov, who wrote about the concept of non-water-based life in a 1962 essay, “Not as We Know It.”
Said Stevenson: “Ours is the first concrete blueprint of life not as we know it.”
Publication: James Stevenson, et al., “Membrane alternatives in worlds without oxygen: Creation of an azotosome,” Science Advances, 2015, Vol. 1 no. 1 e1400067; doi: 10.1126/sciadv.1400067
Source: Anne Ju, Cornell University

Tuesday 20 January 2015

Looking for a Free Backup Solution? Try Areca

Areca Backup is an open source file backup utility that comes with a lot of features, while also being easy to use. It provides a large number of backup options, which make it stand out among the various other backup utilities. This article will help you learn about its features, installation and use on the Linux platform.
Areca Backup is personal file backup software written in Java by Olivier Petrucci and released under the GNU GPL v2. It has been extensively developed to run on major platforms like Windows and Linux, giving users a large number of configurable options with which to select their files and directories for backup, choose where and how to store them, set up post-backup actions and much more.
Features
To start with, it must be made clear that Areca is by no means a disk-ghosting application. That is, it will not be able to make an image of your disk partitions (as Norton Ghost does), mainly because of file permissions. Areca, along with a backup engine, includes a great GUI and CLI. It’s been designed to be as simple, versatile and interactive as possible. A few of the application’s features are:
  • Zip/Zip64 compression and AES 128/AES 256 archive encryption algorithms
  • Storage on local drive, network drive, USB key, FTP/FTPS (with implicit and explicit SSL/TLS encryption) or SFTP server
  • Incremental, differential and full backup support
  • Support for delta backup
  • Backup filters (by extension, sub-directory, regexp, size, date, status and usage)
  • Archive merges
  • Recovery of archives as of a given date
  • Backup reports
  • Tools to help you handle your archives easily and efficiently, such as Backup, Archive Recovery, Archive Merge, Archive Deletion, Archive Explorer, History Explorer
Installation
Areca is developed in Java, so you need to have the Java Virtual Machine v1.4 or higher already installed and running on your system. You can verify this from the command line:
$ java -version
If Java is missing or outdated, you can download and install it from http://java.sun.com/javase/downloads/index.jsp
To install Areca, download the latest release from http://sourceforge.net/project/showfiles.php?group_id=171505 and extract its contents to your disk. To make Areca executable from the console, go to the extracted Areca directory and run the commands given below:
$ chmod a+x areca.sh areca_check_version.sh
$ chmod a+x -v bin/*
Now you can easily launch Areca from your console with
  • ./areca.sh for Graphical User Interface
  • ./bin/run_tui.sh for Command Line Interface
Now that you’ve set up the entire thing, let’s understand the basics of Areca—what you’ll need to know before getting started with creating your first backup archive.
Basics
Storage modes: Areca offers three different storage modes.
  • Standard (by default), where a new archive is created on each backup.
  • Delta (for advanced users), where a new archive is created on each backup, containing only the parts of files modified since the last backup.
  • Image, where a single archive is created and then updated on each backup.
Target: A backup task is termed a ‘target’ in Areca’s terminology. A target defines the following things.
  • Sources: It defines the files and directories to be stored in the archive at backup.
  • Destination: It defines where to store your archives, such as a file system (external hard drive, USB key, etc) or even your FTP server.
  • Compression and encryption: You may even define how to store your archives, i.e., compressing into a Zip file if data is large or encrypting the archival data to keep it safe, so that it can be decrypted only by using Areca with the correct decryption key.
Your first backup with Areca
After successfully passing through all the checkpoints, you can now move on to creating your first backup with Areca. First, execute the Areca GUI by running ./areca.sh from the console. You’ll see a window (as shown in Figure 1) open up on your screen. Let’s configure a few things.
Set your workspace: The section on the left of the window is your workspace area. The Select button here can be used to set your workspace location. This should be a safe location on your computer, where Areca saves its configuration files. You can see the default workspace location here.
Figure 2: Create a new target (child window)
Figure 3: The main window shows your current targets
Set your target: Now you need to set up your target in order to run your first backup. Go to Edit > New Target. You’ll have something like what’s shown in Figure 2. Now set your Target name, Local Repository (this is where your backup archive is saved), Archive’s name and also Sources by switching the tab at the left, and then do any other configuration you’d like to. Next, click on Save. Your target has been created. Your main window now looks something like what’s shown in Figure 3.
Running your backup: After doing all that is necessary, you can run your first backup. Go to Run > Backup. Then select Use Default Working Directory to use a temporary sub-directory (created at the same location as the archives). Click on Start Backup. Great, so you have now created your first backup.
Recovery: You have a backup archive of your data now. This may be used at any time to recover your lost data. Just select your target from the workspace on the left and right click on the archive on the right section, which you wish to use to recover your data. Click Recover, choose the location, and click OK.
At this stage, you can easily create backups using the Areca GUI. However, you can further learn to configure your backups at http://areca-backup.org/tutorial.php.
Using the command line interface
You just used the Areca GUI to create a backup and recover your data again. Although the GUI is the preferred option, you can use the CLI for the same purpose. It suits those comfortable with the console, and it is also useful for scheduled backups.
To run it, just go to the Areca directory and follow up with the general syntax below:
$ ./bin/run_tui.sh <command> <options>
Here are the few basic commands you’ll need to create backups of your data and recover it using the console. The only prerequisite is the Areca XML config file, which you can generate from the GUI; otherwise, http://areca-backup.org/config.php is a good guide.
1. You can get the textual description of a target group using the describe command, as shown below:
$ ./bin/run_tui.sh describe -config <your xml config file>
2. You can launch a backup on a target or a group of targets using the backup command, as follows:
$ ./bin/run_tui.sh backup -config <your xml config file> [-target <target>] [-f] [-d] [-c] [-s] [-title <archive title>]
Here, -f requests a full backup, -d a differential backup, -c a check of archive consistency after the backup, and -s applies the command to target groups.
3. If you have a backup, you can recover your data using the recover command, as follows:
$ ./bin/run_tui.sh recover -config <config file> -target <target> -destination <destination folder> -date <recovery date: YYYY-MM-DD> [-c]
Here, -c checks and verifies the recovered data.
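Because the CLI needs no display, it pairs naturally with cron for unattended, scheduled backups. The crontab entry below is only a sketch: the installation path, config file name and log location are placeholders for your own set-up.

```
# Hypothetical crontab entry: run a full Areca backup (-f) of the targets
# defined in targets.xml every Sunday at 02:00, checking archive
# consistency afterwards (-c). Adjust all paths to your installation.
0 2 * * 0  cd /home/user/areca && ./bin/run_tui.sh backup -config /home/user/areca/config/targets.xml -f -c >> /home/user/areca/backup.log 2>&1
```

Redirecting output to a log file is worthwhile here, since a cron job has no console to report errors to.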
You can learn more about command line usage at http://areca-backup.org/documentation.php.
Final verdict
The Areca Backup tool is one of the best open source personal file backup tools around. Despite a few limitations, such as no support for VSS (Volume Shadow Copy Service) and an inability to back up files locked by other programs, Areca serves users well thanks to its wide variety of features. Moreover, it has a separate set of plugins that can be used to overcome almost all of these limitations. If you are looking for a personal file backup utility, Areca is hard to beat.

Create Your First App with Android Studio

Android Studio is a new Android development environment developed by Google. It is based on IntelliJ IDEA, which is similar to Eclipse with the ADT plugin. Let’s get familiar with the installation of Android Studio and some of the precautions that must be taken during installation.
Android is gaining market share and opening up new horizons for those who want to develop Android apps. Android app development doesn’t require any investment because all the tools needed for it are free. It has been quite a while since Android app development began, and most of us are aware of how things work: just install Java, then install Eclipse, download the ADT (Android Development Toolkit) bundle, do a bit of configuration and you are all set to develop Android apps. Google now provides us with a new IDE called Android Studio, which is based on IntelliJ IDEA. It is different from Eclipse in many ways. The most basic difference is that you don’t have to do any configuration like you would for Eclipse. Android Studio comes bundled with the Android ADT, and all you need to do is point it to where Java is installed on your system. In this article, I will cover a few major differences between Android Studio and the Eclipse+ADT plugin method. Android Studio is currently available as an ‘early access preview’ or developer preview, so several features will not be available and there is a chance you may encounter bugs.
First, let’s install Android Studio. I’m assuming your developer machine runs Windows with a pre-installed JDK. One thing to check is that the JDK version is later than version 6. Next, go to http://developer.android.com/sdk/installing/studio.html. Here, you’ll see the button for downloading Android Studio. The Web page automatically recognises your OS and offers you the compatible version. If you need to download for some other OS, just click on ‘Download for other Platforms’ (refer to Figure 1). Once downloaded, you can follow the set-up wizard. There might be a few challenges. For example, at times on Windows systems, the launcher script isn’t able to find Java, so you need to set an environment variable called JAVA_HOME and point it to your JDK folder.
If you are on Windows 8, you can follow these steps to set an environment variable: click on Computer -> Properties -> Advanced System Settings -> Advanced tab (on the System Properties dialogue) -> Environment Variables. Then, under System Variables, click on New. Another problem might be the PATH variable. In the same manner as above, reach the Environment Variables dialogue box, and there, instead of creating a new variable, find the existing PATH variable and edit it: to the existing value, add a semicolon at the end (if it’s not already there), followed by the path to the bin folder of the JDK. Also, please note that if you are working on a 64-bit machine, the path to the JDK should be something like C:\Program Files\Java\jdk1.7.0_21 and not C:\Program Files (x86)\Java\jdk1.7.0. If you don’t have it in the former location, it means that a 64-bit version of Java isn’t installed on your system; install that first.
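If you prefer the command line, the same two variables can also be set from a Command Prompt with setx. This is just a sketch; the JDK path shown is an example and must be changed to match where the JDK is actually installed on your machine:

```
:: Point the Android Studio launcher at the JDK
:: (example path; adjust to your actual JDK install location)
setx JAVA_HOME "C:\Program Files\Java\jdk1.7.0_21"

:: Append the JDK's bin folder to PATH so java and javac
:: can be found from any command prompt
setx PATH "%PATH%;%JAVA_HOME%\bin"
```

Note that setx affects new Command Prompt windows only, so close and reopen the prompt (and restart Android Studio) for the change to take effect.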
Figure 1 - Download Android Studio

Figure 2 - Welcome Screen
Now that the set-up is complete, we can go ahead and launch Android Studio directly; there is no need to download the ADT plugin and configure it. On launching it, you see the Welcome screen (refer to Figure 2), which packs in a surprising amount of functionality. You can check out projects from version control systems right from the Welcome screen; the systems supported are GitHub, CVS, Git, Mercurial and Subversion. From the Configure menu within the Welcome screen, you can configure the SDK manager, plugins, import/export settings, project default settings and the overall settings for the IDE, all without even opening a project. You can also access the docs and how-tos from the Welcome screen. Next, the New Project screen is quite similar to its Eclipse counterpart, but now there’s no need to select Android Application or anything else: you land directly at the spot from where you can start a new Android project (refer to Figure 3). Among the other interesting touches in Android Studio is the ‘Tip of the day’ section (refer to Figure 4), which helps you get familiar with the IDE.
Figure 3 - New Project
Figure 4 - Tip of the day
Figure 5 - Different Layout Preview
Now, let’s focus on some specific features that come with Android Studio (quoting directly from the Android Developers Web page):
  • Gradle-based build support.
  • Android-specific refactoring and quick fixes.
  • Lint tools to catch performance, usability, version compatibility and other problems.
  • ProGuard and app-signing capabilities.
  • Template-based wizards to create common Android designs and components.
  • A rich layout editor that allows you to drag-and-drop UI components, preview layouts on multiple screen configurations, and much more.
  • Built-in support for Google Cloud Platform, making it easy to integrate Google Cloud Messaging and App Engine as server-side components.
One of the major changes with respect to Eclipse is the use of Gradle. Previously, Android used Ant for builds, but with Android Studio, this task has been taken over by Gradle. At last year’s Google I/O, Google talked about the new Android build system based on Gradle. To quote from the Gradle website: “Google selected Gradle as the foundation of the Android SDK build system because it provides flexibility along with the ability to define common standards for Android builds. With Gradle, Android developers can use a simple, declarative DSL to configure Gradle builds supporting a wide variety of Android devices and App stores. With a simple, declarative DSL, Gradle developers have access to a single, authoritative build that powers both the Android Studio IDE and builds from the command-line.”

Owing to Gradle, you will also notice a change in the project structure as compared to that in Eclipse: everything now resides inside the src folder. From a developer’s perspective, though, it is essentially all still the same.

The other major, and rather useful, change is the ability to preview a layout on different screen sizes (refer to Figure 5, as shown during last year’s Google I/O). While the drag-and-drop designer is retained, the text mode now has a preview pane on the right that lets you see the layout on various screen sizes at once. There is also an option for creating a landscape variation of the same layout without having to do much at the code level.

This is just the tip of the iceberg; the features discussed above are among the major changes in terms of builds and layout design. I would encourage zealous developers who want to try out this IDE to visit the Android developers’ page and check out the Android Studio section. It is definitely a different way to approach Android app development, with the focus shifting towards development rather than configuration and management.
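To give a feel for the declarative DSL mentioned above, here is a minimal build.gradle sketch for an Android app module, roughly as generated by the early Android Studio preview. The plugin version and SDK levels shown are illustrative of that era and should be matched to whatever your installed SDK provides:

```groovy
// Declare where the Android Gradle plugin itself comes from
buildscript {
    repositories {
        mavenCentral()
    }
    dependencies {
        // Early-preview plugin version; newer tooling uses higher versions
        classpath 'com.android.tools.build:gradle:0.4.+'
    }
}

// Activate the Android build logic for this module
apply plugin: 'android'

android {
    // API level to compile against and build tools to use
    compileSdkVersion 17
    buildToolsVersion '17.0.0'

    defaultConfig {
        minSdkVersion 8      // oldest supported device API level
        targetSdkVersion 17  // API level the app is tested against
    }
}
```

Running `gradle assemble` (or building from within the IDE) is then enough to produce the APK; the declarative blocks above replace the Ant build.xml and project.properties files that the Eclipse + ADT workflow required.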