
Wide Color Photos Are Coming to Android: Things You Need to Know to be Prepared

Android Developers Blog

Posted by Peiyong Lin, Software Engineer

Android is now at the point where the sRGB color gamut with 8 bits per color channel is not enough to take advantage of modern display and camera technology. At Android we have been working to make wide color photography happen end-to-end, i.e. more bits and bigger gamuts. This means that, eventually, users will be able to capture the richness of a scene, share wide color pictures with friends, and view wide color pictures on their phones. And now with Android Q, this is getting really close to reality: wide color photography is coming to Android. So it's very important for applications to be wide color gamut ready. This article shows how you can test whether your application is wide color gamut ready and wide color gamut capable, and the steps you need to take to be ready for wide color gamut photography.

But before we dive in, why wide color photography? Display panels and camera sensors on mobile are getting better every year. More and more newly released phones ship with calibrated display panels, some of which are wide color gamut capable. Modern camera sensors can capture scenes with a range of color wider than sRGB and thus produce wide color gamut pictures. When these two come together, they create an end-to-end photography experience with the more vibrant colors of the real world.

At a technical level, this means there will be pictures coming to your application with an ICC profile that is not sRGB but some other wider color gamut: Display P3, Adobe RGB, etc. For consumers, this means their photos will look more realistic.

Above are the Display P3 version and the sRGB version, respectively, of the same scene. If you are reading this article on a calibrated, wide color gamut capable display, you will notice a significant difference between the two.

Color Tests

There are two kinds of tests you can perform to know whether your application is prepared. One is what we call the color correctness test; the other is the wide color test.

Color Correctness test: Is your application wide color gamut ready?

A wide color gamut ready application manages color proactively. This means that, when given images, the application always checks the color space and does the conversion based on its own capability of showing wide color gamut. Thus, even if the application can't handle wide color gamut, it can still show the sRGB color gamut of the image correctly, without color distortion.

Below is a color correct example of an image with a Display P3 ICC profile.

However, if your application is not color correct, it will typically end up manipulating or displaying the image without converting the color space correctly, resulting in color distortion. For example, you may get the image below, where the colors are washed out and everything looks distorted.

Wide Color test: Is your application wide color gamut capable?

A wide color gamut capable application can, when given wide color gamut images, show colors outside of the sRGB color space. Here's an image you can use to test whether your application is wide color gamut capable: if it is, a red Android logo will show up. Note that you must run this test on a wide color gamut capable device, for example a Pixel 3 or Samsung Galaxy S10.
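
If you want to gate this test in code, you can also check at runtime whether the current device and display are wide color gamut capable. Below is a minimal sketch using the Display and Configuration APIs available since API level 26; the helper method name and the choice to require both checks are our own illustration.

// A minimal sketch: call from an Activity on a device running API level 26+.
private boolean isWideColorGamutSupported() {
    // Display#isWideColorGamut() and Configuration#isScreenWideColorGamut()
    // were both added in API level 26.
    boolean displayIsWide =
            getWindowManager().getDefaultDisplay().isWideColorGamut();
    boolean screenIsWide =
            getResources().getConfiguration().isScreenWideColorGamut();
    return displayIsWide && screenIsWide;
}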

What you should do to prepare

To prepare for wide color gamut photography, your application must at least pass the wide color gamut ready test, which we call the color correctness test. If your application passes it, that's awesome! If it doesn't, here are the steps to make it wide color gamut ready.

The key to being prepared and future proof is that your application should never assume that the external images it gets are in the sRGB color space. This means the application must check the color space of decoded images and do the conversion when necessary. Failing to do so will result in color distortion or the color profile being discarded somewhere in your pipeline.
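
For example, a minimal check on a decoded bitmap might look like the sketch below. Bitmap#getColorSpace() and ColorSpace#isSrgb() are available from API level 26; how you handle the non-sRGB case depends on your pipeline, so that branch is left as a placeholder.

Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH);
ColorSpace colorSpace = bitmap.getColorSpace();
if (colorSpace != null && !colorSpace.isSrgb()) {
    // The image is not sRGB: convert it, or re-decode it with a target
    // color space your pipeline can handle (see the snippets below).
}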

Mandatory: Be Color Correct

You must be at least color correct. If your application doesn't adopt wide color gamut, you most likely just want to decode every image to the sRGB color space. You can do that using either BitmapFactory or ImageDecoder.

Using BitmapFactory

In API level 26, we added inPreferredColorSpace to BitmapFactory.Options, which allows you to specify the target color space you want the decoded bitmap to have. Let's say you want to decode a file; below is the snippet you are likely to use in order to manage the color:

final BitmapFactory.Options options = new BitmapFactory.Options();
// Decode this file to sRGB color space.
options.inPreferredColorSpace = ColorSpace.get(ColorSpace.Named.SRGB);
Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);

Using ImageDecoder

In Android P (API level 28), we introduced ImageDecoder, a modernized approach for decoding images. If your app targets API level 28 or higher, we recommend using it instead of the BitmapFactory and BitmapFactory.Options APIs.

Below is a snippet to decode the image to an sRGB bitmap using ImageDecoder#decodeBitmap API.

ImageDecoder.Source source =
        ImageDecoder.createSource(new File(FILE_PATH));
try {
    bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    decoder.setTargetColorSpace(ColorSpace.get(ColorSpace.Named.SRGB));
                }
            });
} catch (IOException e) {
    // handle exception.
}

ImageDecoder also has the advantage of letting you know the encoded color space of the bitmap before you get the final bitmap, by passing an ImageDecoder.OnHeaderDecodedListener and checking ImageDecoder.ImageInfo#getColorSpace(). Thus, depending on how your application handles color spaces, you can check the encoded color space of the content and set the target color space accordingly.

ImageDecoder.Source source =
        ImageDecoder.createSource(new File(FILE_PATH));
try {
    bitmap = ImageDecoder.decodeBitmap(source,
            new ImageDecoder.OnHeaderDecodedListener() {
                @Override
                public void onHeaderDecoded(ImageDecoder decoder,
                        ImageDecoder.ImageInfo info,
                        ImageDecoder.Source source) {
                    ColorSpace cs = info.getColorSpace();
                    // Do something...
                }
            });
} catch (IOException e) {
    // handle exception.
}

For more detailed usage you can check out the ImageDecoder APIs here.

Known bad practices

Typical bad practices include manipulating or displaying an image without first checking its color space, or simply assuming that every decoded image is sRGB.

All of these cause a severe, user-visible result: color distortion. For example, below is a code snippet that makes the application not color correct:

// This is bad, don't do it!
final BitmapFactory.Options options = new BitmapFactory.Options();
final Bitmap bitmap = BitmapFactory.decodeFile(FILE_PATH, options);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE);

There's no color space check before the bitmap is uploaded as a texture, so the application ends up with a distorted result in the color correctness test.

Optional: Be wide color capable

Besides the above changes you must make in order to handle images correctly, if your application is heavily image based, you will want to take additional steps to display these images in their full vibrant range by enabling the wide color gamut mode in your manifest or creating a Display P3 surface.

To enable the wide color gamut in your activity, set the colorMode attribute to wideColorGamut in your AndroidManifest.xml file. You need to do this for each activity for which you want to enable wide color mode.

android:colorMode="wideColorGamut"

You can also set the color mode programmatically in your activity by calling the setColorMode(int) method and passing in COLOR_MODE_WIDE_COLOR_GAMUT.
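
For example, a minimal sketch of the programmatic approach, assuming you call it from your Activity (for instance in onCreate()); the constant lives in ActivityInfo:

// Request the wide color gamut color mode for this Activity's window (API 26+).
getWindow().setColorMode(ActivityInfo.COLOR_MODE_WIDE_COLOR_GAMUT);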

To render wide color gamut content, besides the wide color content itself, you will also need to create a wide color gamut surface to render to. In OpenGL, for example, your application must first check that the required EGL color space extensions are supported.

Then request Display P3 as the color space when creating your surfaces, as shown in the following code snippet:

// Values defined by the EGL_KHR_gl_colorspace and
// EGL_EXT_gl_colorspace_display_p3_passthrough extensions.
private static final int EGL_GL_COLORSPACE_KHR = 0x309D;
private static final int EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT = 0x3490;

public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
                                      EGLConfig config, Object nativeWindow) {
  EGLSurface surface = null;
  try {
    int attribs[] = {
      EGL_GL_COLORSPACE_KHR, EGL_GL_COLORSPACE_DISPLAY_P3_PASSTHROUGH_EXT,
      EGL10.EGL_NONE
    };
    surface = egl.eglCreateWindowSurface(display, config, nativeWindow, attribs);
  } catch (IllegalArgumentException e) {
    // handle exception.
  }
  return surface;
}

Also check out our post for more details on how you can adopt wide color gamut in native code.

API design guidelines for image libraries

Finally, if you own or maintain an image decoding/encoding library, it will need to at least pass the color correctness tests as well. To modernize your library, we strongly recommend two things when you extend its APIs to manage color:

  1. Explicitly accept ColorSpace as a parameter when you design new APIs or extend existing APIs. Instead of hardcoding a color space, an explicit ColorSpace parameter is a more future-proof way moving forward.
  2. Have all legacy APIs explicitly decode the bitmap to the sRGB color space. Historically there was no color management, and Android implicitly treated everything as sRGB until Android 8.0 (API level 26), so decoding legacy results to sRGB helps your users keep backward compatibility.

After you finish, go back to the above section and perform the two color tests.

May 20th 2019, 3:22 pm

Kotlin Is Everywhere! Join the global event series

Android Developers Blog

Posted by Florina Muntenescu & Wojtek Kaliciński, Developer Advocates, Android

Last week at Google I/O, we announced a big step: Android development will become increasingly Kotlin-first. It’s a language that many of you already love: over 50% of professional Android developers now use Kotlin, and it’s the fastest-growing language on GitHub. As part of this announcement, many new Jetpack APIs and features will be offered first in Kotlin. So if you’re starting a new project, you should try writing it in Kotlin; code written in Kotlin often means much less code for you–less code to type, test, and maintain.

To help you dive deeper into Kotlin, we’re happy to announce a new program we’re launching together with JetBrains: Kotlin/Everywhere, a series of community-driven events focusing on the potential of Kotlin on all platforms. We aim to help you learn the essentials and best practices of using Kotlin everywhere, be it for Android, back-end, front-end, or other platforms.

Join the Kotlin/Everywhere global event series between June and December 2019.

Who can attend the events?

Whether you are a developer, a speaker, a Kotlin User Group or Google Developer Group member, or any other community leader, join us. Anyone interested in learning Kotlin and its ecosystem, sharing knowledge, and hosting a Kotlin-focused event is welcome to attend.

If you are a developer wanting to learn more about Kotlin, or a speaker excited to share your Kotlin experience with others, you can find events near you to join. Just go to the map on the website. More events will be added over time.

How to host your Kotlin/Everywhere event?

If you want to host an event in your city, you can begin by checking out the detailed organizers’ guide. It will help you to decide on the format and what kind of support you might need. All the necessary tips and tricks, materials, and branding assets are inside. Go ahead and submit your event on the official web page.

Besides the detailed organizers’ guide, we also provide you with resources such as content, codelabs, and guidance to help you maximize your success. You can also apply for support: we have speakers from Google and JetBrains, and can help by providing funding for venue, food and drinks, swag, or other needs. We will also list your event on the official website.

Still have questions? Ask them at our hangout sessions for organizers on May 16 and 17.

Let us know if you want to take part! Apply at kotl.in/everywhere

May 16th 2019, 2:07 pm

New! Learn How to Build Android Apps with Android Jetpack and Kotlin

Android Developers Blog

Posted by Dan Galpin

Developing Android Apps with Kotlin, developed by Google together with Udacity, is our newly-released, free, self-paced online course. You'll learn how to build Android apps using industry-standard tools and libraries in the Kotlin programming language.

Android development fundamentals are taught in the context of an architecture that provides the scaffolding for robust, maintainable applications. The course covers why and how to use Android Jetpack components such as Room for databases, WorkManager for background processing, the Navigation component, and more. You'll use popular community libraries to simplify common tasks such as Glide for image loading, Retrofit for networking, and Moshi for JSON parsing. The course teaches key Kotlin features such as coroutines to help you write your app code more quickly and concisely.

As you work through the course, you'll build fun and interesting apps, such as a Mars photo gallery, a trivia game, a sleep tracker and much more.

This course is intended for people who have programming experience and are comfortable with Kotlin basics. If you're new to the Kotlin language, we recommend taking the Udacity Kotlin Bootcamp course first.

The course is available free, online at Udacity; take it in your own time at your own pace.

Come learn how to build Android apps in Kotlin with us at https://www.udacity.com/course/ud9012.

May 15th 2019, 5:23 pm

Supporting Google Play developers regarding local market withholding tax regulations

Android Developers Blog

Posted by Gloria On, Program Manager, Google Play

Many developers are increasingly focused on growing their businesses globally, and there were more than 94 billion apps downloaded from Google Play in the last year, reaching more than 190 countries. The regulatory environment is frequently changing in local markets, and in some countries local governments have implemented withholding tax requirements on transactions with which Google or our payment processor partners must comply. We strive to help both developers and Google meet local tax requirements in markets where we do business, and where Google or our payment processor partners are required to withhold taxes, we may need to deduct those amounts from our payments to developers.

Due to new requirements in some markets, we'll be rolling out withholding taxes soon to all those doing business in those countries. We wanted to bring this to the attention of Google Play developers to allow you time to prepare for these upcoming changes and take any necessary measures to meet these obligations. We strongly recommend developers consult with a professional tax advisor on your individual tax implications in affected markets and for guidance on the potential impact on your business so that you can make any necessary preparations.

The first countries where we will roll out these changes will be Saudi Arabia, Kuwait, and Myanmar. You can refer to the Google Play help center page to stay informed on future updates and changes.


May 15th 2019, 12:34 pm

Queue the Hardening Enhancements

Android Developers Blog

Posted by Jeff Vander Stoep, Android Security & Privacy Team and Chong Zhang, Android Media Team

Android Q Beta versions are now publicly available. Among the various new features introduced in Android Q are some important security hardening changes. While exciting new security features are added in each Android release, hardening generally refers to security improvements made to existing components.

When prioritizing platform hardening, we analyze data from a number of sources, including our vulnerability rewards program (VRP). Past security issues provide useful insight into which components can use additional hardening. Android publishes monthly security bulletins, which include fixes for all the high/critical severity vulnerabilities in the Android Open Source Project (AOSP) reported through our VRP. While fixing vulnerabilities is necessary, we also get a lot of value from the metadata - analysis of the location and class of vulnerabilities. With this insight, we can apply targeted hardening strategies to our existing components.

Here’s a look at high severity vulnerabilities by component and cause from 2018:

Most of Android’s vulnerabilities occur in the media and Bluetooth components. Use-after-free (UAF), integer overflows, and out-of-bounds (OOB) reads/writes comprise 90% of vulnerabilities, with OOB being the most common.

A Constrained Sandbox for Software Codecs

In Android Q, we moved software codecs out of the main mediacodec service into a constrained sandbox. This is a big step forward in our effort to improve security by isolating various media components into less privileged sandboxes. As Mark Brand of Project Zero points out in his Return To Libstagefright blog post, constrained sandboxes are not where an attacker wants to end up. In 2018, approximately 80% of the critical/high severity vulnerabilities in media components occurred in software codecs, meaning further isolating them is a big improvement. Due to the increased protection provided by the new mediaswcodec sandbox, these same vulnerabilities will receive a lower severity based on Android’s severity guidelines.

The following figure shows an overview of the evolution of media services layout in the recent Android releases.

With this move, we now have the two primary sources for media vulnerabilities tightly sandboxed within constrained processes. Software codecs are similar to extractors in that they both have extensive code parsing bitstreams from untrusted sources. Once a vulnerability is identified in the source code, it can be triggered by sending a crafted media file to media APIs (such as MediaExtractor or MediaCodec). Sandboxing these two services allows us to reduce the severity of potential security vulnerabilities without compromising performance.

In addition to constraining riskier codecs, a lot of work has also gone into preventing common types of vulnerabilities.

Bound Sanitizer

Incorrect or missing memory bounds checking on arrays account for about 34% of Android’s userspace vulnerabilities. In cases where the array size is known at compile time, LLVM’s bound sanitizer (BoundSan) can automatically instrument arrays to prevent overflows and fail safely.

BoundSan instrumentation

BoundSan is enabled in 11 media codecs and throughout the Bluetooth stack for Android Q. By optimizing away a number of unnecessary checks, the performance overhead was reduced to less than 1%. BoundSan has already found and prevented potential vulnerabilities in codecs and Bluetooth.

More integer sanitizer in more places

Android pioneered the production use of sanitizers in Android Nougat when we first started rolling out integer sanitization (IntSan) in the media frameworks. This work has continued with each release and has been very successful in preventing otherwise exploitable vulnerabilities. For example, new IntSan coverage in Android Pie mitigated 11 critical vulnerabilities. Enabling IntSan is challenging because overflows are generally benign and unsigned integer overflows are well defined and sometimes intentional. This is quite different from the bound sanitizer, where OOB reads/writes are always unintended and often exploitable. Enabling IntSan has been a multi-year project, but with Q we have fully enabled it across the media frameworks with the inclusion of 11 more codecs.

IntSan Instrumentation

IntSan works by instrumenting arithmetic operations to abort when an overflow occurs. This instrumentation can have an impact on performance, so evaluating the impact on CPU usage is necessary. In cases where performance impact was too high, we identified hot functions and individually disabled IntSan on those functions after manually reviewing them for integer safety.

BoundSan and IntSan are considered strong mitigations because (where applied) they prevent the root cause of memory safety vulnerabilities. The class of mitigations described next target common exploitation techniques. These mitigations are considered to be probabilistic because they make exploitation more difficult by limiting how a vulnerability may be used.

Shadow Call Stack

LLVM’s Control Flow Integrity (CFI) was enabled in the media frameworks, Bluetooth, and NFC in Android Pie. CFI makes code reuse attacks more difficult by protecting the forward edges of the call graph, such as function pointers and virtual functions. Android Q uses LLVM’s Shadow Call Stack (SCS) to protect return addresses, protecting the backward edge of the control flow graph. SCS accomplishes this by storing return addresses in a separate shadow stack, which is protected from leakage by storing its location in the x18 register, now reserved by the compiler.

SCS Instrumentation

SCS has negligible performance overhead and a small memory increase due to the separate stack. In Android Q, SCS has been turned on in portions of the Bluetooth stack and is also available for the kernel. We’ll share more on that in an upcoming post.

eXecute-Only Memory

Like SCS, eXecute-Only Memory (XOM) aims at making common exploitation techniques more expensive. It does so by strengthening the protections already provided by address space layout randomization (ASLR), which in turn makes code reuse attacks more difficult by requiring attackers to first leak the location of the code they intend to reuse. This often means that an attacker now needs two vulnerabilities, a read primitive and a write primitive, where previously just a write primitive was necessary in order to achieve their goals. XOM protects against leaks (memory disclosures of code segments) by making code unreadable. Attempts to read execute-only code result in the process aborting safely.

Tombstone from a XOM abort

Starting in Android Q, platform-provided AArch64 code segments in binaries and libraries are loaded as execute-only. Not all devices will immediately receive the benefit as this enforcement has hardware dependencies (ARMv8.2+) and kernel dependencies (Linux 4.9+, CONFIG_ARM64_UAO). For apps with a targetSdkVersion lower than Q, Android’s zygote process will relax the protection in order to avoid potential app breakage, but 64 bit system processes (for example, mediaextractor, init, vold, etc.) are protected. XOM protections are applied at compile-time and have no memory or CPU overhead.

Scudo Hardened Allocator

Scudo is a dynamic heap allocator designed to be resilient against heap-related vulnerabilities.

Scudo does not prevent exploitation but rather proactively manages memory in a way to make exploitation more difficult. It is configurable on a per-process basis depending on performance requirements. Scudo is enabled in extractors and codecs in the media frameworks.

Tombstone from Scudo aborts

Contributing security improvements to Open Source

AOSP makes use of a number of open source projects to build and secure Android. Google is actively contributing back to these projects in a number of security-critical areas.

Thank you to Ivan Lozano, Kevin Deus, Kostya Kortchinsky, Kostya Serebryany, and Mike Logan for their contributions to this post.

May 9th 2019, 11:32 am

What’s New in Android Q Security

Android Developers Blog

Posted by Rene Mayrhofer and Xiaowen Xin, Android Security & Privacy Team

With every new version of Android, one of our top priorities is raising the bar for security. Over the last few years, these improvements have led to measurable progress across the ecosystem, and 2018 was no different.

In the 4th quarter of 2018, we had 84% more devices receiving a security update than in the same quarter the prior year. At the same time, no critical security vulnerabilities affecting the Android platform were publicly disclosed without a security update or mitigation available in 2018, and we saw a 20% year-over-year decline in the proportion of devices that installed a Potentially Harmful App. In the spirit of transparency, we released this data and more in our Android Security & Privacy 2018 Year In Review.

But now you may be asking, what’s next?

Today at Google I/O we lifted the curtain on all the new security features being integrated into Android Q. We plan to go deeper on each feature in the coming weeks and months, but first wanted to share a quick summary of all the security goodness we’re adding to the platform.

Encryption

Storage encryption is one of the most fundamental (and effective) security technologies, but current encryption standards require that devices have cryptographic acceleration hardware. Because of this requirement, many devices are not capable of using storage encryption. The launch of Adiantum changes that in the Android Q release. We announced Adiantum in February; it is designed to run efficiently without specialized hardware, and can work across everything from smart watches to internet-connected medical devices.

Our commitment to the importance of encryption continues with the Android Q release. All compatible Android devices newly launching with Android Q are required to encrypt user data, with no exceptions. This includes phones, tablets, televisions, and automotive devices. This will ensure the next generation of devices are more secure than their predecessors, and allow the next billion people coming online for the first time to do so safely.

However, storage encryption is just one half of the picture, which is why we are also enabling TLS 1.3 support by default in Android Q. TLS 1.3 is a major revision to the TLS standard finalized by the IETF in August 2018. It is faster, more secure, and more private. TLS 1.3 can often complete the handshake in fewer roundtrips, making the connection time up to 40% faster for those sessions. From a security perspective, TLS 1.3 removes support for weaker cryptographic algorithms, as well as some insecure or obsolete features. It uses a newly-designed handshake which fixes several weaknesses in TLS 1.2. The new protocol is cleaner, less error prone, and more resilient to key compromise. Finally, from a privacy perspective, TLS 1.3 encrypts more of the handshake to better protect the identities of the participating parties.

Platform Hardening

Android utilizes a strategy of defense-in-depth to ensure that individual implementation bugs are insufficient for bypassing our security systems. We apply process isolation, attack surface reduction, architectural decomposition, and exploit mitigations to render vulnerabilities more difficult or impossible to exploit, and to increase the number of vulnerabilities needed by an attacker to achieve their goals.

In Android Q, we have applied these strategies to security-critical areas such as media, Bluetooth, and the kernel. We describe these improvements, and some of the highlights, more extensively in a separate blog post.

Authentication

Android Pie introduced the BiometricPrompt API to help apps utilize biometrics, including face, fingerprint, and iris. Since the launch, we’ve seen a lot of apps embrace the new API, and now with Android Q, we’ve updated the underlying framework with robust support for face and fingerprint. Additionally, we expanded the API to support additional use-cases, including both implicit and explicit authentication.

In the explicit flow, the user must perform an action to proceed, such as tap their finger to the fingerprint sensor. If they’re using face or iris to authenticate, then the user must click an additional button to proceed. The explicit flow is the default flow and should be used for all high-value transactions such as payments.

Implicit flow does not require an additional user action. It is used to provide a lighter-weight, more seamless experience for transactions that are readily and easily reversible, such as sign-in and autofill.

Another handy new feature in BiometricPrompt is the ability to check if a device supports biometric authentication prior to invoking BiometricPrompt. This is useful when the app wants to show an “enable biometric sign-in” or similar item in their sign-in page or in-app settings menu. To support this, we’ve added a new BiometricManager class. You can now call the canAuthenticate() method in it to determine whether the device supports biometric authentication and whether the user is enrolled.
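
As a rough sketch of that check from an Activity, assuming the platform android.hardware.biometrics.BiometricManager available on Android Q (the androidx.biometric library offers a similar compatibility API):

BiometricManager biometricManager = getSystemService(BiometricManager.class);
if (biometricManager != null
        && biometricManager.canAuthenticate() == BiometricManager.BIOMETRIC_SUCCESS) {
    // Biometric hardware is available and the user is enrolled:
    // show the "enable biometric sign-in" option.
}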

What’s Next?

Beyond Android Q, we are looking to add Electronic ID support for mobile apps, so that your phone can be used as an ID, such as a driver’s license. Apps such as these have a lot of security requirements and involve integration between the client application on the holder’s mobile phone, a reader/verifier device, and issuing authority backend systems used for license issuance, updates, and revocation.

This initiative requires expertise around cryptography and standardization from the ISO and is being led by the Android Security and Privacy team. We will be providing APIs and a reference implementation of HALs for Android devices in order to ensure the platform provides the building blocks for similar security and privacy sensitive applications. You can expect to hear more updates from us on Electronic ID support in the near future.

May 9th 2019, 11:32 am

What’s New with Android Jetpack

Android Developers Blog

Posted by Karen Ng, Group Product Manager and Jisha Abubaker, Product Manager, Android

Last year, we launched Android Jetpack, a collection of software components designed to accelerate Android development and make writing high-quality apps easier. Jetpack was built with you in mind -- to take the hardest, most common developer problems on Android and make your lives easier.

Jetpack has seen incredible adoption and momentum. Today, 80% of the top 1,000 apps in the Play store are using Jetpack. We’ve also heard feedback from so many of you across our early access developer programs and user studies, as well as Reddit, Stack Overflow, and Slack, that has helped shape these APIs. Very humbly, thank you.

What’s New in Jetpack

Today, we are excited to share with you 11 Jetpack libraries that can be used in development now and an early-development, open-source project called Jetpack Compose to simplify UI development.

Now in Alpha

CameraX

We've heard from many of you that developing camera apps or integrating camera functionality within your existing apps is hard. With the new CameraX library, we want to enable you to create great camera-driven experiences in your application without worrying about the underlying device behavior. This API is backwards compatible to Android 5.0 (API level 21) or higher, ensuring that the same code works on most devices in the market. While it leverages the capabilities of camera2, it uses a simpler, use case-based approach that is lifecycle-aware, eliminating a significant amount of boilerplate code compared to camera2. Finally, through optional Extensions, it enables you to access the same functionality as the native camera app on supported devices, with features like Portrait, Night, HDR, and Beauty.

LiveData and Lifecycles w/ coroutines

We heard you loud and clear and agree that LiveData must support your common one-shot asynchronous operations. With Lifecycle & LiveData KTX, you can do so with Kotlin coroutines that are lifecycle-aware. Kotlin coroutines have been well received by the developer community for how they simplify the way concurrency is handled within Android apps. We want to simplify this even further and enable you to use them safely by offering coroutine scopes tied to lifecycles, coroutine dispatchers that are lifecycle-aware, and support for simple asynchronous chains with the new liveData builder.

Benchmark

The Benchmark library provides a quick way to benchmark your app code, whether it is written in Kotlin, the Java programming language, or native code. We use this library to continuously benchmark Jetpack libraries we release to ensure we do not introduce any latency into your code. You can now do the same right within your development environment in Android Studio, easily measuring database queries, view inflation, or a RecyclerView scroll. The library takes care of what is needed to provide reliable and consistent results, like handling warm-up periods, removing outliers, and locking CPU clocks.
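
As an illustration, a benchmark written in the Java programming language looks roughly like the sketch below; the class name and doWork() are placeholders, and exact package names may differ slightly across the alpha releases.

@RunWith(AndroidJUnit4.class)
public class SampleBenchmark {
    @Rule
    public BenchmarkRule benchmarkRule = new BenchmarkRule();

    @Test
    public void benchmarkSomeWork() {
        final BenchmarkState state = benchmarkRule.getState();
        while (state.keepRunning()) {
            doWork(); // placeholder for the operation you want to measure
        }
    }
}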

Security

To maximize security of an application’s data at-rest, the new Security library implements security best practices for you. It provides strong security that balances encryption with performance for consumer apps like banking and chat. It also provides a maximum level of security for apps that require a hardware-backed keystore with user presence and simplifies many operations including key generation and validation.
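
For instance, creating encrypted SharedPreferences with the library looks roughly like the sketch below. The API names reflect the early androidx.security.crypto releases and may evolve; the file name, preference key, and the context variable are illustrative.

// Both calls can throw GeneralSecurityException or IOException.
String masterKeyAlias = MasterKeys.getOrCreate(MasterKeys.AES256_GCM_SPEC);

SharedPreferences prefs = EncryptedSharedPreferences.create(
        "secret_prefs",              // illustrative file name
        masterKeyAlias,
        context,
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM);

prefs.edit().putString("auth_token", "value").apply();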

ViewModel with SavedState

ViewModel provides an easy way to save your UI data across a configuration change, but it does not save your app state in the event of process death, and many of you have been relying on SavedInstanceState alongside ViewModel. With the ViewModel with SavedState module, you can eliminate boilerplate code and gain the benefits of using both ViewModel and SavedState with simple APIs to save and retrieve data right from your ViewModel.
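
A rough sketch of what that looks like in a ViewModel; the class and key names here are illustrative, and the SavedStateHandle is supplied by the SavedState-aware ViewModel factory.

public class SearchViewModel extends ViewModel {
    private final SavedStateHandle state;

    public SearchViewModel(SavedStateHandle state) {
        this.state = state;
    }

    // Values written to the handle survive both configuration changes
    // and process death.
    public void setQuery(String query) {
        state.set("query", query);
    }

    public LiveData<String> getQuery() {
        return state.getLiveData("query");
    }
}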

ViewPager2

ViewPager2, the next generation of ViewPager, is now based on RecyclerView and supports vertical scrolling and RTL (Right-to-Left) layouts. It also provides a much easier way to listen for page data changes with registerOnPageChangeCallback.
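
For example, a page-change listener looks roughly like this (viewPager is assumed to be a ViewPager2 instance from your layout):

viewPager.registerOnPageChangeCallback(new ViewPager2.OnPageChangeCallback() {
    @Override
    public void onPageSelected(int position) {
        // React to the newly selected page.
    }
});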

Now in Beta

ConstraintLayout 2.0

ConstraintLayout 2.0 brings new optimizations and new ways of customizing layouts with the addition of helper classes. As part of ConstraintLayout 2.0, MotionLayout provides an easy way to manage motion and widget animation in your applications. You can easily describe transitions between layouts and the animation of properties. MotionLayout is fully declarative in XML, allowing you to describe even complex transitions without requiring any code.

Biometrics Prompt

Users are accustomed to biometric credentials on their phones, but if your app requires a biometric login, it is important to make sure that users are provided a consistent and safe way to enter their credentials. The Biometrics library provides a simple system prompt giving the user a trustworthy experience.

Enterprise

With the Jetpack Enterprise library, your managed enterprise apps can send feedback back to Enterprise Mobility Management providers in the form of keyed app states, while taking advantage of backwards compatibility with managed configurations.

Android for Cars

With the Android for Cars libraries, you can provide your users a driver-optimized version of your app that will be automatically installed onto the vehicle’s infotainment system in vehicles equipped with the Android Automotive OS. It also allows your apps to work with the Android Auto app, providing the driver-optimized version anytime on their device.

Now in Stable

And in case you missed it, we announced stable releases of Jetpack WorkManager (background processing) and Jetpack Navigation (in-app navigation) just a few months ago.

Jetpack Compose

Today, we open-sourced an early preview of Jetpack Compose, a new unbundled toolkit designed to simplify UI development by combining a reactive programming model with the conciseness and ease-of-use of Kotlin. We have always done our best work when we did it with you - our developer community. That’s why we decided to develop Jetpack Compose in the open, starting today.

In that vein, we took a step back and chatted with many of you. We heard strong feedback from developers that they like the modern, reactive APIs that Flutter, React Native, Litho, and Vue.js represent. We also heard that developers love Kotlin, with over 53% of professional Android developers using it and with 20% higher language satisfaction ratings than the Java programming language. Kotlin has become the fastest-growing language in terms of number of contributors on GitHub.

So, we decided to invest in the reactive approach to declarative programming and create an easier way to build UIs with Kotlin.

We are building Compose with a few core principles in mind.

A Compose application is made up of composable functions that transform application data into a UI hierarchy. A function is all you need to create a new UI component. To create a composable function, just add the @Composable annotation to the function. Under the hood, Compose uses a custom Kotlin compiler plug-in, so when the underlying data changes, the composable functions can be re-invoked to generate an updated UI hierarchy. A composable function can be as simple as one that prints a string to the screen.

We know that adopting any new framework is a big change for existing projects and codebases, which is why we’ve designed Compose like all of Jetpack -- with individual components that you can adopt at your own pace and are compatible with existing views.

If you want to learn more about Jetpack Compose or download its source to try it for yourself, check out http://d.android.com/jetpackcompose

We'd love to hear from you as we iterate on this exciting future together. Send us feedback by posting comments below, and please file any bugs you run into on AOSP or directly through the feedback buttons in the Android Studio Jetpack Compose build in AOSP. Since this is an early preview, we do not recommend trying this on any production projects.

Happy Jetpacking!

May 8th 2019, 3:44 pm

I/O 2019: New features to help you develop, release, and grow your business on Google Play

Android Developers Blog

Posted by Kobi Glick, Product Lead, Google Play

Over the last 10 years, we’ve worked together to build an incredible ecosystem with more than 2.5 billion active users in over 190 countries. This would not be possible without you and all the fantastic apps and games you’ve built that entertain, help, and educate people around the world.

Every month, you upload more than 750,000 APKs and app bundles to the Play Console. We’ve been amazed by your enthusiasm, and it’s been our privilege to help you grow your business. This year, we want to help you go even further. So today at Google I/O, we're announcing new tools and features to help you develop, release, and grow your apps and games — many of them based on your feedback and suggestions.

Efficient, modular apps and customizable feature delivery

Last year we introduced Android's new publishing format, the Android App Bundle, and an entirely new dynamic delivery framework on Google Play. There are now over 80,000 apps and games using app bundles in production, with an average size savings of 20%. As a result of those savings, apps have seen up to 11% install uplift. As the future of app delivery, we’re excited to share these latest enhancements to the Android App Bundle.

Dynamic features are out of beta and available to all developers, along with several new delivery options.

During our beta program, many developers implemented interesting use cases with dynamic features. Netflix, for example, now delivers their customer support functionality as a dynamic feature to users who visit the support center. By making functionality available only to users who need it, Netflix reported a 33% reduction in app size. You can learn more in the video below.

Seamless internal testing and increased security

We heard you loud and clear: testing bundles is hard. But with the new internal app sharing, you can now share test builds in a matter of seconds. Just upload your app bundle to Google Play and get a download URL to share with your testers. You don’t need to worry about version codes, signing keys, or most other validations that your production releases need to conform to.

In addition to efficiency and modularity, the Android App Bundle also now offers increased security with the launch of app signing key upgrade for new installs. With this feature, you can upgrade the cryptographic strength of your signing key for new installs and their updates on Google Play. Many developers sign their apps with keys generated a long time ago, and this new feature is the only backwards-compatible way to increase their strength.

Easier for users to update

Although auto-updates reach many users, you told us it was still challenging to get some users to update your apps. Now that our new in-app updates API is in general availability, users will be able to update without ever leaving your app. During our early access program, many developers used our API to create a polished upgrade flow, resulting in a median acceptance rate of about 50%.

The API currently supports two flows: an immediate, full-screen flow in which the user must apply the update before continuing to use the app, and a flexible flow in which the user can keep using the app while the update downloads in the background.
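
As a trimmed sketch of triggering the immediate flow with the Play Core library (the appUpdateManager setup, REQUEST_CODE_UPDATE, and the context and activity variables are illustrative assumptions to keep the example short):

AppUpdateManager appUpdateManager = AppUpdateManagerFactory.create(context);

appUpdateManager.getAppUpdateInfo().addOnSuccessListener(appUpdateInfo -> {
    if (appUpdateInfo.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE
            && appUpdateInfo.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)) {
        try {
            // Launches the full-screen, immediate update flow.
            appUpdateManager.startUpdateFlowForResult(
                    appUpdateInfo, AppUpdateType.IMMEDIATE, activity, REQUEST_CODE_UPDATE);
        } catch (IntentSender.SendIntentException e) {
            // Handle the failure to launch the update flow.
        }
    }
});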

Stronger decision-making with new Google Play Console data

The right data can help you improve your app performance and grow your business. That’s why we’re excited to tell you about new metrics and insights that will help you better measure your app health and analyze your performance.

Making it easier to respond to and improve user reviews

We’re also making big changes to another key source of performance data: your user reviews. Many of you told us that you want a rating that reflects a more current version of your app, not what it was years ago — and we agree. So instead of a lifetime cumulative value, your Google Play Store rating will be recalculated to give more weight to your most recent ratings. Users won’t see the updated rating in the Google Play Store until August, but you can preview your new rating in the Google Play Console today.

Every day, developers respond to more than 100,000 reviews in the Play Console, and when they do, we’ve seen that users update their rating by +0.7 stars on average. So in addition to the ratings change, we're making it easier to respond to reviews with suggested replies. When you go to respond to a user, you’ll see three suggested replies which have been created automatically based on the content of the review. You can choose to send one as-suggested, customize a suggestion for more personalization, or create your own message from scratch. Suggested replies are available in English now with additional languages coming later.

Better Google Play Store listing targeting and customization

Your store listing is where users come to learn more about your app or game and decide whether to install. It’s important real estate, so we’re releasing new features that let you optimize your Google Play store listing to address different moments in the user lifecycle.

Learn more about these and other Google Play features at Google I/O. Join us live or watch later on the Android Developers YouTube channel.

You can also take your skills and knowledge to the next level with our e-learning courses on Google Play’s Academy for App Success, and sign up for our newsletter to stay up to date with our latest features and updates.


May 8th 2019, 2:44 pm

Android Studio 3.5 Beta

Android Developers Blog

Posted by Jamal Eason, Product Manager, Android

Android Studio 3.5 Beta is ready to download today. Last year, at Google I/O, we heard from many of you that you wanted us to focus even more on quality and stability over features. Consequently, we kicked off Project Marble, focused on making the fundamental features and flows of the Integrated Development Environment (IDE) rock-solid. Android Studio 3.5 is the culmination of this effort. The results of Project Marble are focused on three core areas: system health, feature polish, and bugs. We are seeking your final round of feedback to make sure we didn't miss a key area that matters to you, so download Android Studio 3.5 on the beta channel today to let us know what you think.

Many times it can be difficult to see the range of changes that go into a quality release. Therefore, this post and our Google I/O talk on What’s New in Android Development Tools walk through a variety of changes in each of the major focus areas of Project Marble within Android Studio 3.5. We are certainly not done improving quality with Android Studio, but with the work and new infrastructure put into Project Marble for long term quality tracking we hope that you are even more productive in developing Android apps.

System Health - Memory

One of the major points of feedback on Android Studio is how slow the IDE runs over time. Many times the reason behind this experience is unexpectedly reaching memory pressure or IDE memory leaks. We dug into this area, and as part of Project Marble, we have addressed over 33 impactful memory leaks. To identify leaks, we now measure out-of-memory exceptions on an internal dashboard on an ongoing basis for those who opt in to share data with us, which enables us to focus on and fix the most impactful issues. Starting with Android Studio 3.5, when the IDE runs out of memory, we capture some high-level statistics about the size of the memory heap and the dominant objects in the heap. With this data, the IDE can do two things: suggest better memory settings and offer to do a deeper memory analysis.

Memory Settings

Memory Usage Report

System Health - Exceptions

We have revamped our exception process backend pipeline. Now with the opt-in data we have earlier signals of common exceptions in aggregate which lets us prioritize and fix issues earlier in the canary release process than before. Moreover, we reduced the amount of times we prompt you for exceptions, since the analytics and opt-in crash reports are now more actionable for our team. The net result is that you should see the blinky red exception report icon in the lower status bar of the IDE less frequently.

Android Studio Exception Bubble

System Health - User Interface Freezes

User Interface (UI) freezes are another common issue we heard from you. In Android Studio 3.5, we extended the infrastructure of the underlying IntelliJ platform, and now measure UI thread stops that last longer than a few moments. Over time, we will have a bigger picture of the top hot spots to focus our efforts on. For example, during the Project Marble development, we found in our data that XML code editing was notably slower in the IDE. With this data point, we optimized XML typing, and have measurably better performance in Android Studio 3.5. You can see below that editing data binding expressions in XML is faster due to typing latency improvements.

Code Editing Before - Android Studio 3.4 (left) and Code Editing After - Android Studio 3.5 (right)

System Health - Build Speed

We continued our investment in build speed. For those developers with larger projects, it is the number one concern. As we uncovered in our recent Medium blog post on build speed, many elements can affect build performance, sometimes slowing it down more than we can improve. However, during Project Marble, we made speed improvements by adding incremental build support to the top annotation processors including Glide, AndroidX data binding, Dagger, Realm, and Kotlin (KAPT). Incremental support can make a notable impact on build speed. For example, in our preliminary analysis, adding incremental support just for Kotlin has improved submodule non-ABI code changes for the Google I/O schedule app from 9.1 seconds to 3.6 seconds – a 60% improvement. Read more about the performance changes to the build system here.

System Health - IDE Speed

In the past, a pro tip some developers used was to turn off Android Studio plugins, such as Android NDK support, to improve performance. While there is nothing wrong with disabling plugins to remove extra menus or options that you don't need, we removed some unnecessary performance hotspots in the Android NDK support that impacted overall IDE speed.

System Health - Lint Code Analysis

Android Lint is a code analysis framework in Android Studio that helps identify common programming mistakes. However, we learned from several user reports that Lint could be too slow—especially when running in batch analysis mode on large projects. After some digging, we found and fixed several large memory leaks, leading to a roughly 2x speedup in Lint performance. We also published a profiling tool that can help identify bottlenecks in individual Lint checks. Read more about the analysis and tool here.

System Health - I/O File Access for Windows

Many users of Android Studio use Microsoft Windows. Over time, we received a range of reports from users on this platform that build times and installation speeds were increasingly getting slower. After investigating the problem during Project Marble, we realized that recent anti-virus programs included Android Studio build and installation directories as active scan targets. Since these folders have many small files created and removed over time, the I/O and CPU are partially taxed and consequently impacts the overall build/sync performance of Android Studio.

Google Internal Data, 2.2GHz quad-core Intel Core i7, April 2019

System Health Notification - Anti-virus Check

System Health - Emulator CPU Usage

Many app developers enjoy the fast and responsive emulator, which has had dramatic performance improvements in the last few years. However, we heard from you that the Android Emulator seems to take an inordinate number of CPU cycles and triggers the cooling fans on laptops even when the emulator is idle in the background. After investigation and measurement, we found that Google Play services and related services were aggressively running in the background because, by default, the emulator was set to AC charging instead of battery discharging. We switched the default to battery discharging, and background CPU usage declined by more than 3x. This change is just one of the many optimizations we made to the Android Emulator during Project Marble. Learn more about the Android Emulator and Project Marble here.

Google Internal Data on Apple MacBook Pro (15” 2016), Emulator: Pixel 3 API 28

Feature Polish - Apply Changes

Being able to quickly edit and see code changes you have made without restarting your app is great for app development. Two years ago, the Instant Run feature was our attempt to enable this flow, but it ultimately fell short of expectations. During the Project Marble time period, we re-architectured and implemented from the ground-up a more practical approach in Android Studio 3.5 called Apply Changes. Apply Changes uses platform-specific APIs from Android Oreo and higher to ensure reliable and consistent behavior; unlike Instant Run, Apply Changes does not modify your APK. To support the changes, we re-architected the entire deployment pipeline to improve deployment speed, and also tweaked the run and deployment toolbar buttons for a more streamlined experience. Learn more about the architecture behind Apply Changes here.

Apply Changes Buttons

Feature Polish - Gradle Sync

A recent and annoying pain point in Android Studio is to have your project unexpectedly trigger red symbols across your app code, especially when re-opening your project. The Gradle build system retains a cache of all the dependencies in your home directory that allows the IDE to quickly sync without re-downloading new artifacts. The root cause for many of the recent incidents of red symbols appearing is that in a recent Gradle change, these caches were periodically deleted to save hard drive space. The IDE was unaware of the discrepancy and consequently generated red symbols for missing dependencies. Starting with Android Studio 3.5, we now have the conditional logic to check for this state. We certainly have more we can do in this area, but this is just one example of the types of issues we addressed for project sync during Project Marble.

Feature Polish - Project Upgrades

Ideally, the Android Studio team would like you to be on the latest version of the IDE, since this is where the team does active feature development, bug fixing, and performance improvement. We know that upgrading Android Studio is not as seamless a process as it should be, with many issues revolving around fixing Gradle plugin errors. With Android Studio 3.5, we have updated the user experience of output windows, pop-ups, and dialog boxes to help clarify when you actually need to upgrade, and we made sync and build upgrade errors more actionable.

From a recent developer survey, we heard that many developers upgrade the Android Studio IDE and the Gradle plugin at the same time. As of the last several releases, the IDE and your Gradle plugin can actually be updated independently. This means that if you want the latest build system speed and correctness improvements, you can upgrade your Gradle plugin, but you can also wait until you're ready. Whether or not you upgrade your Gradle plugin at the same time as the IDE, we encourage you to be on the latest release of Android Studio 3.5 to start using all the enhancements from Project Marble.

Feature Polish - Layout Editor

Based on user research on the layout editor and input from you, we know that there are several performance and error-prone usability issues that make editing XML the only path forward, especially when working with ConstraintLayout. To address the general usability of the layout editor, we refined a wide range of interactions from constraint selection and deletion, to better device preview resizing. While XML code editing is still a click away, we hope you can see that these interaction refinements can be a big productivity boost when creating and editing layouts in Android Studio. Learn more about the full range of layout editor changes here.

Layout Editor Before - Android Studio 3.4 (left) and Layout Editor After - Android Studio 3.5 (right)

Feature Polish - Data Binding

During Project Marble, we also took a look at long standing issues with data binding. From a performance perspective, we found that creating data binding expressions in XML files would lead to severe hangs in the code editor. After fixing this issue we also improved code completion, navigation, and refactoring.

Feature Polish - App Deployment Flow

We streamlined the deployment flow during Project Marble, by adding a new dropdown to easily see and change the device you intend to deploy to and a new menu item to deploy to multiple devices.

App Deployment User Flow

Feature Polish - C++ Improvements

C++ project support was also a focus area during Project Marble. CMake builds are now up to 25% faster for large projects because the IDE now invokes parallel Ninja targets. Additionally, you will find an improved single build variant user interface panel that allows you to specify ABI targets separately. And lastly, Android Studio 3.5 allows you to use multiple versions of the Android NDK side-by-side in your build.gradle file. This should allow you to have more reproducible builds and mitigate incompatibilities between NDK versions and the Android Gradle plugin.

Single Variant Selection by ABI

Feature Polish - IntelliJ Platform Update

This release of Android Studio includes the features and quality enhancements of the 2019.1 IntelliJ platform release. The 2019.1 IntelliJ update has a range of improvements, from custom themes to better version control system integration.

Feature Polish - Conditional Delivery for Dynamic Feature Support

Android Studio 3.5 enhances app bundle support with the addition of conditional delivery features for your app bundle. Conditional delivery allows you to set certain device configuration requirements for dynamic feature modules to be downloaded automatically during app install. You can set conditional delivery based on hardware features such as OpenGL versions or support for augmented reality, or you can set conditions based on API level and user country.

Module Selection for Conditional Delivery

Feature Polish - Emulator Foldables & Pixel Device Support

This release of the IDE includes Android Emulator skins for the Pixel 3a and Pixel 3a XL. Additionally, Android Studio supports the creation of foldable Android Virtual Devices.

Android Emulator - Foldable Support

Feature Polish - Chrome OS Support

Android Studio 3.5 is now officially supported on Chrome OS 75 and higher on high-end x86-based Chromebooks. During Project Marble we addressed several usability issues, added an installer for Android Studio, and added support for app deployment to external USB-connected Android devices. Learn more about how to set up the IDE on Chrome OS here.

Android Studio on Chrome OS

To recap, Android Studio 3.5 has hundreds of bug fixes and notable changes in these core areas:

System Health

Feature Polish

Check out the Android Studio preview release notes page for more details and read deep dives into several areas of Project Marble in the following Medium blog posts:

Opt-In & Feedback

The specific areas and the approach we took to optimize Android Studio for Project Marble were all based on your feedback and metrics data. The aggregate metrics you can opt into inside Android Studio allow us to figure out if there are broader problems in the product for all users, and the data also allows the team to prioritize feature work appropriately. There are a couple of pathways to help us build better insights. At a baseline, you can opt in to metrics by going to Preferences/Settings → Appearance & Behavior → Data Sharing.

IDE Data Sharing

Additionally, throughout the year, you might see user sentiment emojis in the bottom corner of the IDE. Those icons are a lightweight way to tell the Android Studio team how things are going and give us in-context feedback, and they are the fastest way to log a bug and send it to the team.

IDE User Feedback

Getting Started

Download

Download the beta version of Android Studio 3.5 from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version of Android Studio. If you want to maintain a stable version of Android Studio, you can run the stable release version and beta release versions of Android Studio at the same time. Learn more.

To use the mentioned Android Emulator features make sure you are running at least Android Emulator v29.0.6 downloaded via the Android Studio SDK Manager.

As mentioned above, we appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, feel free to file an issue. Follow us -- the Android Studio development team ‐ on Twitter and on Medium.

May 8th 2019, 1:44 pm

Fresher OS with Projects Treble and Mainline

Android Developers Blog

Posted by Anwar Ghuloum, Engineering Director and Maya Ben Ari, Product Manager, Android

With each new OS release, we are making efforts to deliver the latest OS improvements to more Android devices.

Thanks to Project Treble and our continuous collaboration with silicon manufacturers and OEM partners, we have improved the overall quality of the ecosystem and accelerated Android 9 Pie OS adoption by 2.5x compared to Android Oreo. Moreover, Android security updates continue to reach more users, with an 84% increase in devices receiving security updates in Q4, when compared to a year before.

This year, we have increased our overall beta program reach to 15 devices, in addition to Pixel, Pixel 2 and Pixel 3/3a running Android Q beta: Huawei Mate 20 Pro, LGE G8, Sony Xperia XZ3, OPPO Reno, Vivo X27, Vivo NEX S, Vivo NEX A, OnePlus 6T, Xiaomi Mi Mix 3 5G, Xiaomi Mi 9, Realme 3 Pro, Asus Zenfone 5z, Nokia 8.1, Tecno Spark 3 Pro, and Essential PH-1.

But our work hasn’t stopped there. We are continuing to invest in efforts to make Android updates available across the ecosystem.

Safer and more secure devices with Project Mainline

Project Mainline builds on our investment in Treble to simplify and expedite how we deliver updates to the Android ecosystem. Project Mainline enables us to update core OS components in a way that's similar to the way we update apps: through Google Play. With this approach we can deliver selected AOSP components faster, and for a longer period of time – without needing a full OTA update from your phone manufacturer. Mainline components are still open sourced. We are closely collaborating with our partners for code contribution and for testing, e.g., for the initial set of Mainline components our partners contributed many changes and collaborated with us to ensure they ran well on their devices.

As a result, we can accelerate the delivery of security fixes, privacy enhancements, and consistency improvements across the ecosystem.

Security: With Project Mainline, we can deliver faster security fixes for critical security bugs. For example, by modularizing media components, which accounted for nearly 40% of recently patched vulnerabilities, and by allowing us to update Conscrypt, the Java Security Provider, Project Mainline will make your device safer.

Privacy: Privacy has been a major focus for us, and we are putting a lot of effort into better protecting users’ data and increasing privacy standards. With Project Mainline, we have the ability to make improvements to our permissions systems to safeguard user data.

Consistency: Project Mainline helps us quickly address issues affecting device stability, compatibility, and developer consistency. We are standardizing time-zone data across devices. Also, we are delivering a new OpenGL driver implementation, ANGLE, designed to help decrease device-specific issues encountered by game developers.

Our initial set of components supported on devices launching on Android Q:

How does this work?

Mainline components are delivered as either APK or APEX files. APEX is a new file format we developed, similar to APK but with the fundamental difference that APEX is loaded much earlier in the booting process. As a result, important security and performance improvements that previously needed to be part of full OS updates can be downloaded and installed as easily as an app update. To ensure updates are delivered safely, we also built new failsafe mechanisms and enhanced test processes. We are also closely collaborating with our partners to ensure devices are thoroughly tested.

Project Mainline enables us to keep the OS on devices fresher, improve consistency, and bring the latest AOSP code to users faster. Users will get these critical fixes and enhancements without having to take a full operating system update. We look forward to extending the program with our OEM partners through our joint work on mainline AOSP.

May 8th 2019, 1:14 pm

Google I/O 2019: Empowering developers to build the best experiences on Android + Play

Android Developers Blog

Posted by Chet Haase

It's great to be in our backyard again for Google I/O to connect with Android’s developers around the world. The 7,200 attendees at Shoreline Amphitheatre, millions of viewers on the livestream, and thousands of developers at local I/O Extended events across 80+ countries heard about our efforts to make the lives of developers easier. Today at Google I/O, we talked about two big themes: helping our developers become more productive, and strengthening user privacy and security in the platform. Let's take a closer look at the major developer news at I/O so far:

Developer Productivity

This year, we focused on a simple idea: we want to save you time every day by making everything you use even better.

Kotlin

Two years ago, we announced Kotlin was a supported language for Android. Our top developers loved it already, and since then, it’s amazing how fast it’s grown. Over 50% of professional Android developers now use Kotlin, it’s been one of the most-loved languages two years running on Stack Overflow, and one of the fastest-growing on GitHub in number of contributors.

Today we’re announcing another big step: Android development will become increasingly Kotlin-first. Many new Jetpack APIs and features will be offered first in Kotlin. If you’re starting a new project, you should write it in Kotlin; code written in Kotlin often means much less code for you: less code to type, test, and maintain. And, in partnership with JetBrains and the Kotlin Foundation, we’re continuing to invest in tooling, docs, training, and events to make Kotlin even easier to learn and use. This includes Kotlin/Everywhere, a new, global series of events where you can learn more about the language, new Udacity courses, and more.

Android Jetpack

Last year, we announced Android Jetpack, a set of Android APIs designed to accelerate Android development and make writing high-quality apps easy, with less code. Over 80% of our top 1,000 apps are already using Jetpack, as we continue to simplify more everyday developer challenges. Today, we are releasing 6 new Jetpack libraries (in alpha) and bringing 5 libraries to beta quality. Here are 3 highlights:

Android Studio

Today we’re releasing Android Studio 3.5 to Beta. For months, the team has been exclusively focused on refining and polishing day-to-day development workflows with Project Marble. Android Studio 3.5 includes better IDE memory management for large projects, lower typing latency, lint improvements, CPU usage optimizations, layout editor improvements, emulator improvements, and build changes, as well as a complete rewrite of Instant Run, now called Apply Changes, which reliably accelerates the ability to see your code changes on a device, plus over 400 high-priority bug fixes.

Machine Learning at Android scale

In Android Q, we’ve made significant improvements to Android’s Neural Networks API (NNAPI). First, we have increased the number of Operators supported from 38 to over 90. The vast majority of models can now be accelerated by NNAPI with no alterations. We’ve also introduced an introspection API for advanced users, allowing full control over which hardware components handle acceleration (e.g. DSP vs. NPU). And, we’ve worked closely with hardware vendors to deliver significant improvements in performance, both in latency and power consumption. Working with MediaTek, we were able to accelerate ML Kit’s face detection API by 9X on the Helio P90. Working with Qualcomm, we were able to accelerate Google’s Lens OCR on the Snapdragon 855’s AI Engine, increasing speed by 3X while also reducing power consumption by 3.7X.

Dynamic Updates and Inline Updates

Last year we introduced the Android App Bundle to help you reduce app size and increase installs. Since then, we’ve seen over 80,000 app bundles in production, with average size savings of 20%. And today we have a number of announcements to help you reduce size and deliver updates to your users even faster. Today we’re glad to share that dynamic feature modules are moving from beta to stable. With dynamic feature modules, you can reduce your app size even more by choosing which parts of your app to deliver based on conditions like device features or country. You can even deliver modules on demand, instead of at install time. And today we’re also moving in-app updates from beta to stable. The ability to dynamically update apps is something you’ve been requesting for a long time. Let’s say you have a crucial bug in your app and you need to push a fix right away; you don’t want to wait until users discover an update in the Play Store. Now you can.
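
The in-app updates capability is exposed through the Play Core library. Here is a minimal sketch of the immediate update flow, assuming the Play Core dependency is added and that context, activity, and MY_UPDATE_REQUEST_CODE are your own Context, Activity reference, and app-defined constant:

import com.google.android.play.core.appupdate.AppUpdateManagerFactory
import com.google.android.play.core.install.model.AppUpdateType
import com.google.android.play.core.install.model.UpdateAvailability

val updateManager = AppUpdateManagerFactory.create(context)
updateManager.appUpdateInfo.addOnSuccessListener { info ->
    if (info.updateAvailability() == UpdateAvailability.UPDATE_AVAILABLE &&
        info.isUpdateTypeAllowed(AppUpdateType.IMMEDIATE)) {
        // Launch the blocking "immediate" flow, suited to pushing out a critical fix.
        updateManager.startUpdateFlowForResult(info, AppUpdateType.IMMEDIATE, activity, MY_UPDATE_REQUEST_CODE)
    }
}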

User privacy and security in Android Q

As a developer community, we all care about getting this right. It’s about building a platform that offers powerful capabilities for developers, while making sure that user safety and privacy is protected. We introduced Android Q Beta a few months ago with over 50 features and improvements around user privacy and security. These Q changes provide users more transparency and control.

As always, we are working hard to do everything we can for developers adopting the new release. We know you have your own features to build. That’s why, with these Q changes, we’ve worked very hard to minimize the impact for you, as well as to incorporate your feedback. We’ve given as long a notice period as possible, as well as complete and detailed technical information up front, to make it as easy as possible to adopt. We also want to thank the community for your ongoing feedback. It’s been a huge help to the team, who are working hard to get this right. A great example is the Beta 3 storage changes, where your feedback helped us evolve the feature over the course of the Betas. Android has a longstanding commitment to minimizing breaking changes. Our commitment is unchanged, and we’ll work hard to keep Android the open, flexible, and developer-friendly platform we all love.

Be a part of Google I/O!

We’ve got a lot of great content in store for you over the next three days, including over 45 sessions across Android. We’re excited for you to join us in-person here at Shoreline, at an I/O Extended event, or online through the livestream. We’re constantly investing in our platform that connects developers to billions of users around the world. To the entire Android community, thank you for your continued support and feedback, and for being a part of Android.

May 7th 2019, 4:15 pm

What’s New in Android: Q Beta 3 & More

Android Developers Blog

Posted by Dave Burke, VP, Engineering

Today Android is celebrating two amazing milestones. It’s Android’s version 10! And today, Android is running on more than 2.5B active Android devices.

With Android Q, we’ve focused on three themes: innovation, security and privacy, and digital wellbeing. We want to help you take advantage of the latest new technology -- 5G, foldables, edge-to-edge screens, on-device AI, and more -- while making sure users' security, privacy, and wellbeing are always a top priority.

Earlier at Google I/O we highlighted what’s new in Android Q and unveiled the latest update, Android Q Beta 3. Your feedback continues to be extremely valuable in shaping today’s update as well as our final release to the ecosystem in the fall.

This year, Android Q Beta 3 is available on 15 partner devices from 12 OEMs -- that’s twice as many devices as last year! It’s all thanks to Project Treble and especially to our partners who are committed to accelerating updates to Android users globally -- Huawei, Xiaomi, Nokia, Sony, Vivo, OPPO, OnePlus, ASUS, LGE, TECNO, Essential, and realme.

Visit android.com/beta to see the full list of Beta devices and learn how to get today’s update on your device. If you have a Pixel device, you can enroll here to get Beta 3 -- if you’re already enrolled, watch for the update coming soon. To get started developing with Android Q Beta, visit developer.android.com/preview.

Privacy and security

As we talked about at Google I/O, privacy and security are important to our whole company and in Android Q we’ve added many more protections for users.

Privacy

In Android Q, privacy has been a central focus, from strengthening protections in the platform to designing new features with privacy in mind. It’s more important than ever to give users control -- and transparency -- over how information is collected and used by apps, and by our phones.

Building on our work in previous releases, Android Q includes extensive changes across the platform to improve privacy and give users control -- from improved system UI to stricter permissions to restrictions on what data apps can use.

For example, Android Q gives users more control over when apps can get location. Apps still ask the user for permission, but now in Android Q the user has greater choice over when to allow access to location -- such as only while the app is in use, all the time, or never. Read the developer guide for details on how to adapt your app for the new location controls.
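
As a rough illustration of the new location model: on Android Q, background access becomes a separate permission, so an app might request it only when a feature truly needs location while the app is not in use. In the sketch below, activity and REQUEST_LOCATION are your own Activity reference and an app-defined constant:

import android.Manifest
import android.os.Build
import androidx.core.app.ActivityCompat

val permissions = if (Build.VERSION.SDK_INT >= 29) {
    // Foreground ("while in use") and background ("all the time") access are now distinct.
    arrayOf(Manifest.permission.ACCESS_FINE_LOCATION,
            Manifest.permission.ACCESS_BACKGROUND_LOCATION)
} else {
    arrayOf(Manifest.permission.ACCESS_FINE_LOCATION)
}
ActivityCompat.requestPermissions(activity, permissions, REQUEST_LOCATION)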

Outside of location, we also introduced the Scoped Storage feature to give users control over files and prevent apps from accessing sensitive user or app data. Your feedback has helped us refine this feature, and we recently announced several changes to make it easier to support. These are now available in Beta 3.

Another important change is restricting app launches from the background, which prevents apps from unexpectedly jumping into the foreground and taking over focus. In Beta 3 we’re transitioning from toast warnings to actually blocking these launches.

To prevent tracking, we're limiting access to non-resettable device identifiers, including the device IMEI, serial number, and similar identifiers. Read the best practices to choose the right identifiers for your use case. We're also randomizing the MAC address when your device is connected to different Wi-Fi networks and gating connectivity APIs behind the location permission. We’re bringing these changes to you early, so you can have as much time as possible to prepare your apps.

Security

To keep users secure, we’ve extended our BiometricPrompt authentication framework to support biometrics at a system level. We're extending support for passive authentication methods such as face, and we’ve added implicit and explicit authentication flows. In the explicit flow, the user must explicitly confirm the transaction. The new implicit flow is designed as a lighter-weight alternative for transactions with passive authentication, with no need for users to explicitly confirm.
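
For illustration, here is a minimal sketch of the implicit flow using the AndroidX BiometricPrompt library, where setConfirmationRequired(false) signals that no explicit confirmation step is needed; activity (a FragmentActivity) and unlockNotes() are assumptions:

import androidx.biometric.BiometricPrompt
import java.util.concurrent.Executors

val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Unlock notes")
    .setNegativeButtonText("Cancel")
    .setConfirmationRequired(false)   // hint for the lighter-weight implicit flow
    .build()

val prompt = BiometricPrompt(activity, Executors.newSingleThreadExecutor(),
    object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            unlockNotes()   // hypothetical app function run after successful authentication
        }
    })
prompt.authenticate(promptInfo)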

Android Q also adds support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security. Our benchmarks indicate that secure connections can be established as much as 40% faster with TLS 1.3 compared to TLS 1.2. TLS 1.3 is enabled by default for all TLS connections made through Android’s TLS stack, called Conscrypt, regardless of target API level. See the docs for details.
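
A quick way to see this on a device is to inspect the protocols enabled by the default socket factory; a minimal sketch, which is expected (but not guaranteed) to list TLSv1.3 on an Android Q device:

import javax.net.ssl.SSLSocket
import javax.net.ssl.SSLSocketFactory

val socket = SSLSocketFactory.getDefault().createSocket() as SSLSocket
// Conscrypt enables TLS 1.3 by default on Android Q, regardless of target API level.
println(socket.enabledProtocols.joinToString())
socket.close()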

Project Mainline

Today we also announced Project Mainline, a new approach to keeping Android users secure and their devices up-to-date with important code changes, direct from Google Play. With Project Mainline, we’re now able to update specific internal components within the OS itself, without requiring a full system update from your device manufacturer. This means we can help keep the OS code on devices fresher, drive a new level of consistency, and bring the latest AOSP code to users faster -- and for a longer period of time.

We plan to update Project Mainline modules in much the same way as app updates are delivered today -- downloading the latest versions from Google Play in the background and loading them the next time the phone starts up. The source code for the modules will continue to live in the Android Open Source Project, and updates will be fully open-sourced as they are released. Also, because they’re open source, they’ll include improvements and bug fixes contributed by our many partners and developer community worldwide.

For users, the benefits are huge, since their devices will always be running the latest versions of the modules, including the latest updates for security, privacy, and consistency. For device makers, carriers, and enterprises, the benefits are also huge, since they can optimize and secure key parts of the OS without the cost of a full system update.

For app and game developers, we expect Project Mainline to help drive consistency of platform implementation in key areas across devices, over time bringing greater uniformity that will reduce development and testing costs and help to make sure your apps work as expected. All devices running Android Q or later will be able to get Project Mainline, and we’re working closely with our partners to make sure their devices are ready.

Innovation and new experiences

Android is shaping the leading edge of innovation. With our ecosystem partners, we’re enabling new experiences through a combination of hardware and software advances.

Foldables

This year, display technology will take a big leap with foldable devices coming to the Android ecosystem from several top device makers. When folded, these devices work like a phone; unfold them and you have a beautiful tablet-sized screen.

We’ve optimized Android Q to ensure that screen continuity is seamless in these transitions, and apps and games can pick up right where they left off. For multitasking, we’ve made some changes to onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on large screens.

Our partners have already started showing their innovative foldable devices, with more to come. You can get started building and testing today with our foldables emulator in the canary release of Android Studio 3.5.

5G networks

5G networks are the next evolution of wireless technology -- providing consistently faster speeds and lower latency. For developers, 5G can unlock new kinds of experiences in your apps and supercharge existing ones.

Android Q adds platform support for 5G and extends existing APIs to help you transform your apps for 5G. You can use connectivity APIs to detect if the device has a high bandwidth connection and check whether the connection is metered. With these your apps and games can tailor rich, immersive experiences to users over 5G.
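
As a rough sketch of how an app might use these connectivity signals, the hypothetical helper below (with an illustrative bandwidth threshold) checks the active network's reported downstream bandwidth and whether it is metered before prefetching rich content:

import android.content.Context
import android.net.ConnectivityManager
import android.net.NetworkCapabilities

fun shouldPrefetchRichContent(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager
    val caps = cm.getNetworkCapabilities(cm.activeNetwork) ?: return false
    val highBandwidth = caps.linkDownstreamBandwidthKbps > 50_000   // ~50 Mbps, illustrative only
    val unmetered = caps.hasCapability(NetworkCapabilities.NET_CAPABILITY_NOT_METERED)
    return highBandwidth && unmetered
}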

With Android’s open ecosystem and range of partners, we expect the Android ecosystem to scale to support 5G quickly. This year, over a dozen Android device makers are launching 5G-ready devices, and more than 20 carriers will launch 5G networks around the world, with some already broad-scale.

Live Caption

On top of hardware innovation, we’re continuing to see Android’s AI transforming the OS itself to make it smarter and easier to use, for a wider range of people. A great example is Live Caption, a new feature in Android Q that automatically captions media playing on your phone.

Many people watch videos with captions on -- the captions help them keep up, even when on the go or in a crowded place. But for 466 million Deaf and Hard of Hearing people around the world, captions are more than a convenience -- they make content accessible. We worked with the Deaf community to develop Live Caption.

Live Caption brings real-time captions to media on your phone - videos, podcasts, and audio messages, across any app—even stuff you record yourself. Best of all, it doesn’t even require a network connection -- everything happens on the device, thanks to a breakthrough in speech recognition that we made earlier this year. The live speech models run right on the phone, and no audio stream ever leaves your device.

For developers, Live Caption expands the audience for your apps and games by making digital media more accessible with a single tap. Live Caption will be available later this year.

Suggested actions in notifications

In Android Pie we introduced smart replies for notifications that let users engage with your apps directly from notifications. We provided the APIs to attach replies and actions, but you needed to build those on your own.

In Android Q we want to make smart replies available to all apps right away, without you needing to do anything. Starting in Beta 3, we’re enabling system-provided smart replies and actions that are inserted directly into notifications by default.

You can still supply your own replies and actions if you want -- such as if you are using ML Kit or other ML frameworks. Just opt out of the system-provided replies or actions on a per-notification basis using setAllowGeneratedReplies() and setAllowSystemGeneratedContextualActions().
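
A minimal sketch of opting out on a per-notification basis with NotificationCompat; the channel id, icons, and PendingIntent are assumptions:

import androidx.core.app.NotificationCompat

val replyAction = NotificationCompat.Action.Builder(R.drawable.ic_reply, "Reply", replyPendingIntent)
    .setAllowGeneratedReplies(false)   // suppress system smart replies for this action
    .build()

val notification = NotificationCompat.Builder(context, CHANNEL_ID)
    .setSmallIcon(R.drawable.ic_message)
    .setContentTitle("New message")
    .addAction(replyAction)
    .setAllowSystemGeneratedContextualActions(false)   // suppress system contextual actions
    .build()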

Android Q suggestions are powered by an on-device ML service built into the platform -- the same service that backs our text classifier entity recognition service. We’ve built it with user privacy in mind, and the ML processing happens completely on the device, not on a backend server.

Because suggested actions are based on the TextClassifier service, they can take advantage of new capabilities we’ve added in Android Q, such as language detection. You can also use TextClassifier APIs directly to generate system-provided notifications and actions, and you can mix those with your own replies and actions as needed.

Dark theme

Many users prefer apps that offer a UI with a dark theme they can switch to when light is low, to reduce eye strain and save battery. Users have also asked for a simple way to enable dark theme everywhere across their devices. Dark theme has been a popular request for a while, and in Android Q, it’s finally here.

Starting in Android Q Beta 3, users can activate a new system-wide dark theme by going to Settings > Display, using the new Quick Settings tile, or turning on Battery Saver. This changes the system UI to dark, and enables the dark theme of apps that support it. Apps can build their own dark themes, or they can opt-in to a new Force Dark feature that lets the OS create a dark version of their existing theme. All you have to do is opt-in by setting android:forceDarkAllowed="true" in your app’s current theme.

You may also want to take complete control over your app’s dark styling, which is why we’ve also been hard at work improving AppCompat’s DayNight feature. By using DayNight, apps can offer a dark theme to all of their users, regardless of what version of Android they’re using on their devices. For more information, see here.
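
A minimal sketch of wiring DayNight to the system setting, assuming you call it early (for example in Application.onCreate()):

import android.os.Build
import androidx.appcompat.app.AppCompatDelegate

val mode = if (Build.VERSION.SDK_INT >= 29)
    AppCompatDelegate.MODE_NIGHT_FOLLOW_SYSTEM    // follow the new system-wide dark theme
else
    AppCompatDelegate.MODE_NIGHT_AUTO_BATTERY     // fall back to Battery Saver as the trigger
AppCompatDelegate.setDefaultNightMode(mode)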

Gestural navigation

Many of the latest Android devices feature beautiful edge-to-edge screens, and users want to take advantage of every bit of them. In Android Q we’re introducing a new fully gestural navigation mode that eliminates the navigation bar area and allows apps and games to use the full screen to deliver their content. It retains the familiar Back, Home, and recents navigation through edge swipes rather than visible buttons.

Users can switch to gestures in Settings > System > Gestures. There are currently two gestures: swiping up from the bottom of the screen takes the user to the Home screen (swiping up and holding brings up Recents), and swiping from the screen’s left or right edge triggers the Back action.

To blend seamlessly with gestural navigation, apps should go edge-to-edge, drawing behind the navigation bar to create an immersive experience. To implement this, apps should use the setSystemUiVisibility() API to be laid out fullscreen, and then handle WindowInsets as appropriate to ensure that important pieces of UI are not obscured. More information is here.
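
Inside an Activity, a minimal sketch of that edge-to-edge setup might look like the following, where R.id.bottom_action_bar stands in for whatever view you need to keep clear of the gesture area:

import android.view.View

window.decorView.systemUiVisibility =
    View.SYSTEM_UI_FLAG_LAYOUT_STABLE or
    View.SYSTEM_UI_FLAG_LAYOUT_HIDE_NAVIGATION    // lay out behind the navigation bar

findViewById<View>(R.id.bottom_action_bar).setOnApplyWindowInsetsListener { view, insets ->
    // Pad critical UI by the reported bottom inset so it is not obscured.
    view.setPadding(view.paddingLeft, view.paddingTop, view.paddingRight, insets.systemWindowInsetBottom)
    insets
}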

Digital wellbeing

Digital wellbeing is another theme of our work on Android -- we want to give users the visibility and tools to find balance with the way they use their phones. Last year we launched Digital Wellbeing with Dashboards, App Timers, Flip to Shush, and Wind Down mode. These tools are really helping. App timers helped users stick to their goals over 90% of the time, and users of Wind Down had a 27% drop in nightly usage.

This year we’re continuing to expand our features to help people find balance with digital devices, adding Focus Mode and Family Link.

Focus Mode

Focus Mode is designed for all those times you’re working or studying and want to focus to get something done. With Focus Mode, you can pick the apps that you think might distract you and silence them; for example, pausing email and news apps while leaving maps and text message apps active. You can then use Quick Tiles to turn on Focus Mode any time you want to focus. Under the covers, these apps will be paused - until you come out of Focus Mode! Focus Mode is coming to Android 9 Pie and Android Q devices this fall.

Family Link

Family Link is a new set of controls to help parents. Starting in Android Q, Family Link will be built right into the Settings on the device. When you set up a new device for your child, Family Link will help you connect it to you. You’ll be able to set daily screen time limits, see the apps where your child is spending time, review any new apps your child wants to install, and even set a device bedtime so your child can disconnect and get to sleep. And now in Android Q you can also set time limits on specific apps… as well as give your kids Bonus Time if you want them to have just 5 more minutes at bedtime. Family Link is coming to Android P and Q devices this Fall. Make sure to check out the other great wellbeing apps in the recent Google Play awards.

Family Link lets parents set a device bedtime and even give bonus minutes.

Android Foundations

We’re continuing to extend the foundations of Android with more capabilities to help you build new experiences for your users -- here are just a few.

Improved peer-to-peer and internet connectivity

In Android Q we’ve refactored the Wi-Fi stack to improve privacy and performance, and also to improve common use-cases like managing IoT devices and suggesting internet connections -- without requiring the location permission. The network connection APIs make it easier to manage IoT devices over local Wi-Fi, for peer-to-peer functions like configuring, downloading, or printing. The network suggestion APIs let apps surface preferred Wi-Fi networks to the user for internet connectivity.

Wi-Fi performance modes

In Android Q apps can now request adaptive Wi-Fi by enabling high performance and low latency modes. These will be of great benefit where low latency is important to the user experience, such as real-time gaming, active voice calls, and similar use cases. The platform works with the device firmware to meet the requirement with the lowest power consumption. To use the new performance modes, call WifiManager.createWifiLock() with the desired mode to obtain a WifiManager.WifiLock.
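
A minimal sketch of acquiring the low-latency mode for the duration of a real-time session; context and runRealtimeSession() are assumptions:

import android.content.Context
import android.net.wifi.WifiManager

val wifiManager = context.applicationContext.getSystemService(Context.WIFI_SERVICE) as WifiManager
val lock = wifiManager.createWifiLock(WifiManager.WIFI_MODE_FULL_LOW_LATENCY, "myapp:realtime")
lock.acquire()
try {
    runRealtimeSession()   // hypothetical app function, e.g. an active voice call
} finally {
    lock.release()         // always release so the platform can return to normal power behavior
}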

Full support for Wi-Fi RTT accurate indoor positioning

In Android 9 Pie we introduced RTT APIs for indoor positioning to accurately measure the distance to nearby Wi-Fi Access Points (APs) that support the IEEE 802.11mc protocol, based on measuring the round-trip time of Wi-Fi packets. Now in Android Q, we’ve completed our implementation of the 802.11mc standard, adding an API to obtain the location information of each AP being ranged, as configured by its owner during installation.

Audio playback capture

You saw how Live Caption can take audio from any app and instantly turn it into on-screen captions. It’s a seamless experience that shows how powerful it can be for one app to share its audio stream with another. In Android Q, any app that plays audio can let other apps capture its audio stream using a new API. In addition to enabling captioning and subtitles, the API lets you support popular use-cases like live-streaming games, all without latency impact on the source app or game.

We’ve designed this new capability with privacy and copyright protection in mind, so the ability for an app to capture another app's audio is constrained, giving apps full control over whether their audio streams can be captured. Read more here.
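
The opt-out side of that control is a one-line policy on the player's audio attributes; a minimal sketch:

import android.media.AudioAttributes
import android.media.MediaPlayer

val attributes = AudioAttributes.Builder()
    .setUsage(AudioAttributes.USAGE_MEDIA)
    .setAllowedCapturePolicy(AudioAttributes.ALLOW_CAPTURE_BY_NONE)   // this stream may not be captured by other apps
    .build()
val player = MediaPlayer().apply { setAudioAttributes(attributes) }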

Dynamic depth for photos

Apps can now request a Dynamic Depth image, which consists of a JPEG, XMP metadata related to depth elements, and a depth and confidence map embedded in the same file, on devices that advertise support. Requesting a JPEG + Dynamic Depth image makes it possible for you to offer specialized blurs and bokeh options in your app. You can even use the data to create 3D images or support AR photography use cases. Dynamic Depth is an open format for the ecosystem -- the latest version of the spec is here. We're working with our device-maker partners to make it available across devices running Android Q and later.

With Dynamic Depth image you can offer specialized blurs and bokeh options in your app

New audio and video codecs

Android Q adds support for the open source video codec AV1, which allows media providers to stream high quality video content to Android devices using less bandwidth. In addition, Android Q supports audio encoding using Opus - a codec optimized for speech and music streaming, and HDR10+ for high dynamic range video on devices that support it. The MediaCodecInfo API introduces an easier way to determine the video rendering capabilities of an Android device. For any given codec, you can obtain a list of supported sizes and frame rates.
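
A minimal sketch of querying a device's AV1 decoding capabilities through MediaCodecInfo (the loop simply prints nothing on devices without an AV1 decoder):

import android.media.MediaCodecList
import android.media.MediaFormat

val codecInfos = MediaCodecList(MediaCodecList.REGULAR_CODECS).codecInfos
for (info in codecInfos) {
    if (info.isEncoder || MediaFormat.MIMETYPE_VIDEO_AV1 !in info.supportedTypes) continue
    val video = info.getCapabilitiesForType(MediaFormat.MIMETYPE_VIDEO_AV1).videoCapabilities
    // Supported widths and frame rates reported by this decoder.
    println("${info.name}: widths=${video.supportedWidths}, frameRates=${video.supportedFrameRates}")
}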

Vulkan 1.1 and ANGLE

We're continuing to expand the impact of Vulkan on Android, our implementation of the low-overhead, cross-platform API for high-performance 3D graphics. We’re working together with our device manufacturer partners to make Vulkan 1.1 a requirement on all 64-bit devices running Android Q and higher, and a recommendation for all 32-bit devices. For game and graphics developers using OpenGL, we’re also working towards a standard, updateable OpenGL driver for all devices built on Vulkan. In Android Q we're adding experimental support for ANGLE on top of Vulkan on Android devices. See the docs for details.

Neural Networks API 1.2

In NNAPI 1.2 we've added 60 new ops, including ARGMAX, ARGMIN, and quantized LSTM, alongside a range of performance optimizations. This lays the foundation for accelerating a much greater range of models -- such as those for object detection and image segmentation. We are working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and roll out support for NNAPI 1.2.

Thermal API

When devices get too warm, they may throttle the CPU and/or GPU, and this can affect apps and games in unexpected ways. Now in Android Q, apps and games can use a thermal API to monitor changes on the device and take action to help restore normal temperature. For example, streaming apps can reduce resolution/bit rate or network traffic, a camera app could disable flash or intensive image enhancement, or a game could reduce frame rate or polygon tessellation. Read more here.
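
A minimal sketch of reacting to the thermal API; context, lowerStreamQuality(), and restoreStreamQuality() are hypothetical placeholders for your own Context and app functions:

import android.content.Context
import android.os.PowerManager

val powerManager = context.getSystemService(Context.POWER_SERVICE) as PowerManager
powerManager.addThermalStatusListener { status ->
    when {
        status >= PowerManager.THERMAL_STATUS_SEVERE -> lowerStreamQuality()    // shed load while the device is hot
        status <= PowerManager.THERMAL_STATUS_LIGHT  -> restoreStreamQuality()  // recover once things cool down
    }
}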

ART optimizations

Android Q introduces several improvements to the ART runtime to help your apps start faster, consume less memory, and run smoother -- without requiring any work from you. To help with initial app startup, Google Play is now delivering cloud-based profiles along with APKs. These are anonymized, aggregate ART profiles that let ART pre-compile parts of your app even before it's run. Cloud-based profiles benefit all apps and they're already available to devices running Android P and higher.

We’re also adding Generational Garbage Collection to ART's Concurrent Copying (CC) Garbage Collector. Generational CC collects young-generation objects separately, incurring much lower cost as compared to full-heap GC. It makes garbage collection more efficient in terms of time and CPU, reduces jank, and helps apps run better on lower-end devices.

More Android Q Beta devices, more Treble momentum than ever

In 2017 we launched Project Treble as part of Android Oreo, with a goal of accelerating OS updates. Treble provides a consistent, testable interface between Android and the underlying device code from device makers and silicon manufacturers, which makes porting a new OS version much simpler and more modular.

In 2018 we worked closely with our partners to bring the first OS updates to their Treble devices. The result: last year at Google I/O we had 8 devices from 7 partners joining our Android P Beta program, together with our Pixel and Pixel 2 devices. Fast forward to today -- we’re seeing updates to Android Pie accelerating strongly, with 2.5 times the footprint compared to Android Oreo's at the same time last year.

This year with Android Q we’re seeing even more momentum, and we have 21 devices from 12 top global partners joining us to release Android Q Beta 3 -- in addition to all Pixel devices. We’re also providing Q Beta 3 Generic System Images (GSI), a testing environment for other supported Treble devices. All of these offer the same behaviors, APIs, and features -- giving you an incredible variety of devices for testing your apps, and more ways for you to get an early look at Android Q.

You can see the full list of supported partner and Pixel devices at android.com/beta. Try Android Q Beta on your favorite device today and let us know your feedback!

Explore the new features and APIs

When you're ready, dive into Android Q and learn about the new features and APIs you can use in your apps. Take a look at the API diff report for an overview of what's changed in Beta 3, and see the Android Q Beta API reference for details. Visit the Android Q Beta developer site for more resources, including release notes and how to report issues.

To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.

How do I get Beta 3?

It's easy! Just enroll any Pixel device here to get the update over-the-air. If you're already enrolled, you'll receive the update soon, and no action is needed on your part. Downloadable system images are also available.

You can also get Beta 3 on any of the other devices participating in the Android Q Beta program, from some of our top device maker partners. You can see the full list of supported partner and Pixel devices at android.com/beta. For each device you'll find specs and links to the manufacturer's dedicated site for downloads, support, and to report issues.

For even broader testing on supported devices, you can also get Android GSI images, and if you don’t have a device you can test on the Android Emulator -- just download the latest emulator system images via the SDK Manager in Android Studio.

As always, your input is critical, so please let us know what you think. You can use our hotlists for filing platform issues (including privacy and behavior changes), app compatibility issues, and third-party SDK issues. You've shared great feedback with us so far and we're working to integrate as much of it as possible in the next Beta release.

We're looking forward to seeing your apps on Android Q!

May 7th 2019, 3:28 pm

Developing Apps for Android Automotive OS

Android Developers Blog

Posted by Madan Ankapura, Product Manager, Android and Oscar Wahltinez, Developer Programs Engineer

Google's vision is to bring a safe and seamless connected experience to every car. You can see that vision at work today with Android Auto, which enables millions of users to bring apps they use on their smartphones into cars. As display technologies evolve and cars become more connected, there are even more opportunities for developers to build innovative car experiences and reach a new audience.

This is why a few years ago we introduced Android Automotive OS, an Android operating system that is familiar to millions of developers, tailored to run in the car. In just a short time, we have seen increasing demand for Android Automotive OS from vehicle manufacturers. Most recently, Polestar announced that they are shipping their first electric vehicle (Polestar 2) running Android Automotive OS, and this is the first of many to come.

Polestar 2 with Android Automotive OS

Starting with media apps

As the first cars hit the road, we have heard loud and clear from developers, users and OEMs that consuming media like music or podcasts is one of the key use cases while driving. This is why today, we are announcing that media app developers will be able to start creating new entertainment experiences for Android Automotive OS and the Polestar 2, starting at Google I/O.

With a variety of screen sizes, input methods, OEM customizations and regional driver safety guidelines, building embedded apps for cars at scale is a complicated process for developers to do on their own. In order to help manage these complexities, we are building on the same Android Auto framework.

Media app user experience in Android Automotive OS

Beyond media, users require the ability to navigate and communicate with others (via calls, messages). With Android Automotive OS and the Google Play Store, we have plans to enable developers to build apps in these areas and beyond.

If you are interested in learning more, watch our Google I/O 2019 Automotive developer session - How to Build Android Apps for Cars - where we walk through details on how to build your media app using the latest Android Studio, which features an Android Automotive OS emulator and templates.

And if you are one of the developers with a Google I/O ticket this year, please come by our Office hours and app reviews hosted by the Android Automotive team, and run through our Automotive OS codelab.

Test your apps with Android Automotive OS reference unit in Codelabs area

Lastly, we have also established the automotive-developers Google Groups community for developers to discuss Android Automotive OS. For questions better suited for StackOverflow Q&A style, you can post there using the tag android-automotive.

See you at Google I/O 2019!

May 1st 2019, 12:03 pm

Android Q Scoped Storage: Best Practices and Updates

Android Developers Blog

Posted by Jeff Sharkey, Software Engineer, and Seb Grubb, Product Manager

Application Sandboxing is a core part of Android’s design, isolating apps from each other. In Android Q, taking the same fundamental principle from Application Sandboxing, we introduced Scoped Storage.

Since the Beta 1 release, you’ve given us a lot of valuable feedback on these changes -- thank you for helping shape Android! Because of your feedback, we've evolved the feature during the course of Android Q Beta. In this post, we'll share options for declaring your app’s support for Scoped Storage on Android Q devices, and best practices for questions we've heard from the community.

Updates to help you adopt Scoped Storage

We expect that Scoped Storage should have minimal impact to apps following current storage best practices. However, we also heard from you that Scoped Storage can be an elaborate change for some apps and you could use more time to assess the impact. Being developers ourselves, we understand you may need some additional time to ensure your app’s compatibility with this change. We want to help.

In the upcoming Beta 3 release, apps that target Android 9 Pie (API level 28) or lower will see no change, by default, to how storage works from previous Android versions. As you update your existing app to work with Scoped Storage, you’ll be able to use a new manifest attribute to enable the new behavior for your app on Android Q devices, even if your app is targeting API level 28 or lower.

The implementation details of these changes will be available with the Beta 3 release, but we wanted to share this update with you early, so you can better prepare your app for Android Q devices. Scoped Storage will be required in next year’s major platform release for all apps, independent of target SDK level, so we recommend you add support to your app well in advance. Please continue letting us know your feedback and how we can better align Scoped Storage with your app’s use cases. You can give us input through this survey, or file bugs and feature requests here.

Best practices for common feedback areas

Your feedback is incredibly valuable and has helped us shape these design decisions. We also want to take a moment to share some best practices for common questions we’ve heard:

We’ve also provided a detailed Scoped Storage developer guide with additional information.

What’s ahead

It’s been amazing to see the community engagement on Android Q Beta so far. As we finalize the release in the next several months, please continue testing and keep the feedback coming. Join us at Google I/O 2019 for more details on Scoped Storage and other Android Q features. We’re giving a "What’s new on Shared Storage" talk on May 8, and you’ll be able to find the livestream and recorded video on the Google I/O site.

April 25th 2019, 4:30 pm

And the 2019 Google Play Award nominees are...

Android Developers Blog

Posted by Purnima Kochikar, Director, Apps and Games Business Development, Google Play

Drum roll please! To kick off Google I/O this year, the 2019 Google Play Awards will take place on Monday, May 6th. We’re excited to highlight nine categories this year, some familiar and some new, to recognize developers that continue to set the bar for quality apps and games on Google Play.

Building on previous years, we celebrate our fourth year by recognizing some of the best experiences available on Android, with an emphasis on overall quality, strong design, technical performance, and innovation. The nominees were selected by various teams across Google; they meet criteria thresholds covering high star ratings and Android vitals, and have had a launch or major update since April 2018.

Congratulations to this year’s nominees below and remember to check them out on Google Play at play.google.com/gpa2019. Stay tuned as we announce the winners of each category at Google I/O.

Standout Well-Being App

Apps empowering people to live the best version of their lives, while demonstrating responsible design and engagement strategies.

Best Accessibility Experience

Apps and games enabling device interaction in an innovative way that serve people with disabilities or special needs.

Best Social Impact

Apps and games that create a positive impact in communities around the world (focusing on health, education, crisis response, refugees, and literacy).

Most Beautiful Game

Games that exemplify artistry or unique visual effects, either through creative imagery or by utilizing advanced graphics API features.

Best Living Room Experience

Apps that create, enhance, or enable a great living room experience that brings people together.

Most Inventive

Apps and games that display a groundbreaking new use case, such as utilizing new technologies, catering to a unique audience, or demonstrating an innovative application of mobile technology for users.

Standout Build for Billions Experience

Apps and games with optimized performance, localization and culturalization for emerging markets.

Best Breakthrough App

New apps with excellent overall design, user experience, engagement and retention, and strong growth.

Best Breakthrough Game

New games with excellent overall design, user experience, engagement and retention, and strong growth.

Come back on Monday, May 6th when we announce the winners, and until then, make sure to try out some of these great apps and games on Google Play at play.google.com/gpa2019.

How useful did you find this blog post?

April 25th 2019, 1:01 pm

Android Studio 3.4

Android Developers Blog

Posted by Jamal Eason, Product Manager, Android

After nearly six months of development, Android Studio 3.4 is ready to download today on the stable release channel. This is a milestone release of the Project Marble effort from the Android Studio team. Project Marble is our focus on making the fundamental features and flows of the Integrated Development Environment (IDE) rock-solid. On top of many performance improvements and bug fixes we made in Android Studio 3.4, we are excited to release a small but focused set of new features that address core developer workflows for app building & resource management.

Part of the effort of Project Marble is to address user-facing issues in core features of the IDE. At the top of the list of issues for Android Studio 3.4 is an updated Project Structure Dialog (PSD), a revamped user interface for managing dependencies in your app project's Gradle build files. In another build-related change, R8 replaces ProGuard as the default code shrinker and obfuscator. To aid app design, we incorporated your feedback to create a new app resource management tool to bulk import, preview, and manage resources for your project. Lastly, we are shipping an updated Android Emulator that takes fewer system resources and also supports the Android Q Beta. Overall, these features are designed to make you more productive in your day-to-day app development workflow.

Alongside the stable release of Android Studio 3.4, we recently published in-depth blogs on how we are investigating & fixing a range of issues under the auspices of Project Marble. You should check them out as you download the latest update to Android Studio:

The development work for Project Marble is still ongoing, but Android Studio 3.4 incorporates productivity features and over 300 bug fixes and stability enhancements that you do not want to miss. Watch and read below for some of the notable changes and enhancements that you will find in Android Studio 3.4.

Develop

Resource Manager

Jetpack Import Intentions

Layout Editor Properties Panel

Build

Project Structure Dialog

Test

Android Emulator - Pixel 3 XL Emulator Skin

To recap, Android Studio 3.4 includes these new enhancements & features:

Develop

Build

Test

Check out the Android Studio release notes, Android Gradle plugin release notes, and the Android Emulator release notes for more details.

Getting Started

Download

Download the latest version of Android Studio 3.4 from the download page. If you are using a previous release of Android Studio, you can simply update to the latest version of Android Studio. If you want to maintain a stable version of Android Studio, you can run the stable release version and canary release versions of Android Studio at the same time. Learn more.

To use the mentioned Android Emulator features make sure you are running at least Android Emulator v28.0.22 downloaded via the Android Studio SDK Manager.

We appreciate any feedback on things you like, and issues or features you would like to see. If you find a bug or issue, feel free to file an issue. Follow us -- the Android Studio development team ‐ on Twitter and on Medium.

April 17th 2019, 2:30 pm

Indie Games Accelerator - Applications open for class of 2019

Android Developers Blog

Posted by Anuj Gulati, Developer Marketing Manager and Sami Kizilbash, Developer Relations Program Manager

Last year we announced the Indie Games Accelerator, a special edition of Launchpad Accelerator, to help top indie game developers from emerging markets achieve their full potential on Google Play. Our team of program mentors had an amazing time coaching some of the best gaming talent from India, Pakistan, and Southeast Asia. We’re very encouraged by the positive feedback we received for the program and are excited to bring it back in 2019.

Applications for the class of 2019 are now open, and we’re happy to announce that we are expanding the program to developers from select countries* in Asia, Middle East, Africa, and Latin America.

Successful participants will be invited to attend two all-expenses-paid gaming bootcamps at the Google Asia-Pacific office in Singapore, where they will receive personalized mentorship from Google teams and industry experts. Additional benefits include Google hardware, invites to exclusive Google and industry events, and more.

Find out more about the program and apply to be a part of it.

* The competition is open to developers from the following countries: Bangladesh, Brunei, Cambodia, India, Indonesia, Laos, Malaysia, Myanmar, Nepal, Pakistan, Philippines, Singapore, Sri Lanka, Thailand, Vietnam, Egypt, Jordan, Kenya, Lebanon, Nigeria, South Africa, Tunisia, Turkey, Argentina, Bolivia, Brazil, Chile, Colombia, Costa Rica, Ecuador, Guatemala, Mexico, Panama, Paraguay, Peru, Uruguay and Venezuela.

How useful did you find this blog post?

April 17th 2019, 2:03 am

Improving the update process with your feedback

Android Developers Blog

Posted by Sameer Samat, VP of Product Management, Android & Google Play

Thank you for all the feedback about updates we’ve been making to Android APIs and Play policies. We’ve heard your requests for improvement as well as some frustration. We want to explain how and why we’re making these changes, and how we are using your feedback to improve the way we roll out these updates and communicate with the developer community.

From the outset, we’ve sought to craft Android as a completely open source operating system. We’ve also worked hard to ensure backwards compatibility and API consistency, out of respect and a desire to make the platform as easy to use as possible. This developer-centric approach and openness have been cornerstones of Android’s philosophy from the beginning. These are not changing.

But as the platform grows and evolves, each decision we make comes with trade-offs. Every day, billions of people around the world use the apps you’ve built to do incredible things like connect with loved ones, manage finances, or communicate with doctors. Users want more control and transparency over how their personal information is being used by applications, and expect Android, as the platform, to do more to provide that control and transparency. This responsibility to users is something we have always taken seriously, and that’s why we are taking a comprehensive look at how our platform and policies reflect that commitment.

Taking a closer look at permissions

Earlier this year, we introduced Android Q Beta with dozens of features and improvements that provide users with more transparency and control, further securing their personal data. Along with the system-level changes introduced in Q, we’re also reviewing and refining our Play Developer policies to further enhance user privacy. For years, we’ve required developers to disclose the collection and use of personal data so users can understand how their information is being used, and to only use the permissions that are really needed to deliver the features and services of the app. As part of Project Strobe, which we announced last October, we are rolling out specific guidance for each of the Android runtime permissions, and we are holding apps developed by Google to the same standard.

We started with changes to SMS and Call Log permissions late last year. To better protect sensitive user data available through these permissions, we restricted access to select use cases, such as when an app has been chosen by the user to be their default text message app. We understood that some app features using this data would no longer be allowed -- including features that many users found valuable -- and worked with you on alternatives where possible. As a result, today, the number of apps with access to this sensitive information has decreased by more than 98%. The vast majority of these were able to switch to an alternative or eliminate minor functionality.

Learning from developer feedback

While these changes are critical to help strengthen privacy protections for our users, we’re sensitive that evolving the platform can lead to substantial work for developers. We have a responsibility to make sure you have the details and resources you need to understand and implement changes, and we know there is room for improvement there. For example, when we began enforcing these new SMS and Call Log policies, many of you expressed frustration about the decision making process. There were a number of common themes that we wanted to share:

In response, we are improving and clarifying the process, including:

Evaluating developer accounts

We have also heard concerns from some developers whose accounts have been blocked from distributing apps through Google Play. While the vast majority of developers on Android are well-meaning, some accounts are suspended for serious, repeated violation of policies that protect our shared users. Bad-faith developers often try to get around this by opening new accounts or using other developers’ existing accounts to publish unsafe apps. While we strive for openness wherever possible, in order to prevent bad-faith developers from gaming our systems and putting our users at risk in the process, we can’t always share the reasons we’ve concluded that one account is related to another.

While 99%+ of these suspension decisions are correct, we are also very sensitive to how impactful it can be if your account has been disabled in error. You can immediately appeal any enforcement, and each appeal is carefully reviewed by a person on our team. During the appeals process, we will reinstate your account if we discover that an error has been made.

Separately, we will soon be taking more time (days, not weeks) to review apps by developers that don’t yet have a track record with us. This will allow us to do more thorough checks before approving apps to go live in the store and will help us make even fewer inaccurate decisions on developer accounts.

Thank you for your ongoing partnership and for continuing to make Android an incredibly helpful platform for billions of people around the world.

How useful did you find this blog post?

April 16th 2019, 10:47 pm

Optimize your subscriptions with new insights in the Play Console

Android Developers Blog

Posted by Daniel Schramm, Product Manager, Google Play

Since launching on Google Play nearly 7 years ago, subscriptions have proven to be an essential element in creating sustainable mobile app businesses; 89 of the top 100 highest-grossing apps on Google Play in the US now provide subscription products. As the market matures, it is becoming increasingly important for subscription developers to optimize both subscriber conversion and retention in order to maintain growth. To help you do that, we're rolling out new insights available directly in the Play Console.

Subscription retention report

Example subscription retention report data in the Play Console. Source: Google Internal Data.

The recently updated subscription retention report shows how well you are retaining subscribers, along with how well subscribers convert from free trial, introductory price, and first to second payment.

You can configure two cohorts based on SKU, country, and subscription start date. This is particularly useful for evaluating the success of A/B tests; for example, to determine if changing the duration of a free trial has an impact on free trial conversion.

Example free trial conversion data in the Play Console. Source: Google Internal Data.

Cancellation survey results

Retaining your existing subscribers is just as important as acquiring new subscribers, so we have updated the subscription cancellations report to give more insight into voluntary and involuntary cancellations.

The launch of the subscriptions center last year introduced a cancellation survey allowing users to give developers feedback as to why they were cancelling, with results available through the Google Play Developer API. To make these results easier to access and monitor, we now surface daily aggregates directly within the Play Console, along with the ability to download written responses in a CSV.

Example cancellation survey responses in the Play Console. Source: Google Internal Data.

Recover more users

Involuntary cancellations, which occur when a user's form of payment fails, account for over a third of all cancellations. The new recovery performance cards in the cancellation report help you understand how effectively you are recovering users with grace period and account hold, and show the day on which subscriptions were recovered, helping you evaluate the effectiveness of your recovery messaging.

Example account hold performance recovery card in the Play Console. Source: Google Internal Data.

Make sure you've set up grace periods and account hold for your apps! We've seen that developers who use both grace period and account hold see a more than 3x increase in decline recovery rate, from 10% to 33%. Discover more information on grace period and account hold.

You can find the subscription retention and cancellation reports linked from the bottom of the Subscriptions page, in the Financial reports section of the Play Console. If you don't have access to financial reporting, ask your developer account owner for permission to view financial data.

We hope this new reporting gives you new insights to optimize your subscription business, and we look forward to sharing more with you at Google I/O in May.

How useful did you find this blog post?

April 11th 2019, 1:00 pm

Google Play Instant feature plugin deprecation

Android Developers Blog

Posted by Miguel Montemayor and Diana García Ríos

As of Android Gradle plugin 3.4.0 (included in Android Studio 3.4), we are starting the deprecation process of the feature plugin (com.android.feature) and instant app plugin (com.android.instantapp) as a way to build your instant app. When building your app, you will receive a warning flagging com.android.feature as deprecated. If you have an existing instant app built with the feature plugin, migrate your existing app to an instant-enabled app bundle as soon as possible.

What is changing?

Last year, we introduced Android App Bundles—a new way to build and publish your Android apps. App bundles simplify delivering optimized APKs, including instant delivery, by unifying uploads into a single artifact. Google Play handles distribution by serving the correct APKs to your instant and installed app users—this is called Dynamic Delivery. To learn more about app bundles, visit the documentation site.

Dynamic Delivery is based on the idea of shipping dynamic features (com.android.dynamic-feature) to app users when they need them and only if they need them. There are currently three delivery types, based on the values you give the dist:module tag attributes in the dynamic feature module’s manifest file:

    <dist:module
       dist:instant="..."
       dist:onDemand="..."
       ... >
    </dist:module>
  • dist:instant="false", dist:onDemand="false": dynamic feature delivered at install time
  • dist:instant="true", dist:onDemand="false": dynamic feature delivered instantly and at install time
  • dist:instant="false", dist:onDemand="true": dynamic feature delivered on demand (beta)
  • dist:instant="true", dist:onDemand="true": N/A (not a supported combination)

By migrating your instant app to an instant-enabled app bundle with dynamic features, you will be ready to leverage the full power of this new paradigm and you will be able to simplify your app’s modular design.

The migration

Previously, instant apps required creating a feature module that acted as the base module for your app. This base feature module contained the shared code and resources for both your instant and installed application. The rest of your codebase consisted of:

With the new app bundle implementation, your base feature module takes on the role of your app module (com.android.application), hosting the code and resources common to all features (instant and installed). You organize additional, modular features as one of three types of dynamic feature modules, based on when you want to deliver them to the user. The instant app module disappears, since the dist:instant attributes in the manifest are enough to identify which features will be included as part of the instant experience.

If you don’t have an instant experience added to your app and you’d like to create one, use Android Studio 3.3+ to create an instant-enabled app bundle.

April 9th 2019, 2:49 pm

ML Kit expands into NLP with Language Identification and Smart Reply

Android Developers Blog

Posted by Christiaan Prins and Max Gubin

Today we are announcing the release of two new features to ML Kit: Language Identification and Smart Reply.

You might notice that both of these features are different from our existing APIs that were all focused on image/video processing. Our goal with ML Kit is to offer powerful but simple-to-use APIs to leverage the power of ML, independent of the domain. As such, we are excited to expand ML Kit with solutions for Natural Language Processing (NLP)!

NLP is a category of ML that deals with analyzing and generating text, speech, and other kinds of natural language data. We're excited to start out with two APIs: one that helps you identify the language of text, and one that generates reply suggestions in chat applications. Both of these features work fully on-device and are available on the latest version of the ML Kit SDK, on iOS (9.0 and higher) and Android (4.1 and higher).

Generate reply suggestions based on previous messages

A new feature popping up in messaging apps is to provide the user with a selection of suggested responses, either as actions on a notification or inside the app itself. This can really help users respond quickly when they are busy, and it's also a handy way to initiate a longer message.

With the new Smart Reply API you can now quickly achieve the same in your own apps. The API provides suggestions based on the last 10 messages in a conversation, although it still works if only one previous message is available. It is a stateless API that runs fully on-device, so we neither keep message history in memory nor send it to a server.

textPlus app providing response suggestions using Smart Reply

We have worked closely with partners like textPlus to ensure Smart Reply is ready for prime time and they have now implemented in-app response suggestions with the latest version of their app (screenshot above).

Adding Smart Reply to your own app is done with a simple function call (using Swift in this example):

let smartReply = NaturalLanguage.naturalLanguage().smartReply()
smartReply.suggestReplies(for: conversation) { result, error in
    guard error == nil, let result = result else {
        return
    }
    if (result.status == .success) {
        for suggestion in result.suggestions {
            print("Suggested reply: \(suggestion.text)")
        }
    }
}

After you initialize a Smart Reply instance, call suggestReplies with a list of recent messages. The callback provides the result which contains a list of suggestions.
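
On Android, a roughly equivalent Kotlin snippet is sketched below. It assumes the Firebase ML Kit natural-language artifacts are on your classpath; the class and method names (FirebaseNaturalLanguage, FirebaseTextMessage, suggestReplies) are taken from the Android SDK as documented at the time, and "friend123" is a placeholder remote-user id, so check the current documentation before relying on them:

import android.util.Log
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage
import com.google.firebase.ml.naturallanguage.smartreply.FirebaseTextMessage
import com.google.firebase.ml.naturallanguage.smartreply.SmartReplySuggestionResult

// Build the conversation from the most recent messages (oldest first, newest last).
val conversation = listOf(
    FirebaseTextMessage.createForRemoteUser(
        "Are we still on for lunch?", System.currentTimeMillis() - 60_000, "friend123"),
    FirebaseTextMessage.createForLocalUser(
        "Yes, see you at noon", System.currentTimeMillis())
)

val smartReply = FirebaseNaturalLanguage.getInstance().smartReply
smartReply.suggestReplies(conversation)
    .addOnSuccessListener { result ->
        if (result.status == SmartReplySuggestionResult.STATUS_SUCCESS) {
            result.suggestions.forEach { suggestion ->
                Log.d("SmartReply", "Suggested reply: ${suggestion.text}")
            }
        }
    }
    .addOnFailureListener { e -> Log.e("SmartReply", "Suggestion request failed", e) }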

For details on how to use the Smart Reply API, check out the documentation.

Tell me more ...

Although as a developer you can just pick up this new API and easily integrate it in your app, it may be interesting to reveal a bit about how it works under the hood. At the core of Smart Reply is a machine-learned model that is executed using TensorFlow Lite and has a state-of-the-art architecture based on SentencePiece text encoding[1] and Transformer[2].

However, as we realized when we started development of the API, the core suggestion model is not all that's needed to provide a solution that developers can use in their apps. For example, we added a model to detect sensitive topics, so that we avoid making suggestions in response to profanity or in cases of personal tragedy/hardship. Also, we included language identification, to ensure we do not provide suggestions for languages the core model is not trained on. The Smart Reply feature is launching with English support first.

Identify the language of a piece of text

The language of a given text string is a subtle but helpful piece of information. A lot of apps have functionality with a dependency on the language: you can think of features like spell checking, text translation or Smart Reply. Rather than asking a user to specify the language they use, you can use our new Language Identification API.

ML Kit recognizes text in 103 different languages and typically only requires a few words to make an accurate determination. It is fast as well, typically providing a response within 1 to 2 ms across iOS and Android phones.

Similar to the Smart Reply API, you can identify the language with a function call (using Swift in this example):

let languageId = NaturalLanguage.naturalLanguage().languageIdentification()
languageId.identifyLanguage(for: "¿Cómo estás?") { languageCode, error in
  guard error == nil, let languageCode = languageCode else {
    print("Failed to identify language with error: \(error!)")
    return
  }

  print("Identified Language: \(languageCode)")
}

The identifyLanguage function takes a piece of text and its callback provides a BCP-47 language code. If no language can be confidently recognized, ML Kit returns a code of und for undetermined. The Language Identification API can also provide a list of possible languages and their confidence values.
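
A roughly equivalent Kotlin call on Android might look like the following minimal sketch; the FirebaseNaturalLanguage and language-identification names reflect the Firebase ML Kit Android SDK at the time and should be verified against the current docs:

import android.util.Log
import com.google.firebase.ml.naturallanguage.FirebaseNaturalLanguage

val languageIdentifier = FirebaseNaturalLanguage.getInstance().languageIdentification

languageIdentifier.identifyLanguage("¿Cómo estás?")
    .addOnSuccessListener { languageCode ->
        if (languageCode == "und") {
            Log.d("LangId", "Could not confidently identify the language")
        } else {
            Log.d("LangId", "Identified language: $languageCode")  // e.g. "es"
        }
    }
    .addOnFailureListener { e -> Log.e("LangId", "Language identification failed", e) }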

For details on how to use the Language Identification API, check out the documentation.

Get started today

We're really excited to expand ML Kit to include Natural Language APIs. Give the two new NLP APIs a spin today and let us know what you think! You can always reach us in our Firebase Talk Google Group.

As ML Kit grows we look forward to adding more APIs and categories that enable you to provide smarter experiences for your users. With that, please keep an eye out for some exciting ML Kit announcements at Google I/O.

April 5th 2019, 1:06 pm

Android Q Beta 2 update

Android Developers Blog

Posted by Dave Burke, VP of Engineering

A few weeks ago we introduced Android Q Beta, a first look at the next version of Android. Along with new privacy features for users, Android Q adds new capabilities for developers - like enhancements for foldables, new APIs for connectivity, new media codecs and camera capabilities, NNAPI extensions, Vulkan 1.1 graphics, and more.

Android's program of early, open previews is driven by our core philosophy of openness, and collaboration with our community. Your feedback since Beta 1 proves yet again the value of that openness - it's been loud, clear, and incredibly valuable. You've sent us thousands of bug reports, giving us insights and directional feedback, changing our plans in ways that make the platform better for users and developers. We're taking your feedback to heart, so please stay tuned. We're fortunate to have such a passionate community helping to guide Android Q toward the final product later this year.

Today we're releasing Android Q Beta 2 and an updated SDK for developers. It includes the latest bug fixes, optimizations, and API updates for Android Q, along with the April 2019 security patches. You'll also notice isolated storage becoming more prominent as we look for your wider testing and feedback to help us refine that feature.

We're still in early Beta with Android Q so expect rough edges! Before you install, check out the Known Issues. In particular, expect the usual transitional issues with apps that we typically see during early Betas as developers get their app updates ready.

You can get Beta 2 today by enrolling any Pixel device here. If you're already enrolled, watch for the Beta 2 update coming soon. Stay tuned for more at Google I/O in May.

What's new in Beta 2?

Privacy features for testing and feedback

As we shared at Beta 1, we're making significant privacy investments in Android Q in addition to the work we've done in previous releases. Our goals are improving transparency, giving users more control, and further securing personal data across platform and apps. We know that to reach those goals, we need to partner with you, our app developers. We realize that supporting these features is an investment for you too, so we'll do everything we can to minimize the impact on your apps.

For features like Scoped Storage, we're sharing our plans as early as possible to give you more time to test and give us your input. To generate broader feedback, we've also enabled Scoped Storage for new app installs in Beta 2, so you can more easily see what is affected.

With Scoped Storage, apps can use their private sandbox without permission, but they need new permissions to access shared collections for photos, videos and audio. Apps using files in shared collections -- for example, photo and video galleries and pickers, media browsing, and document storage -- may behave differently under Scoped Storage.

We recommend getting started with Scoped Storage soon -- the developer guide has details on how to handle key use-cases. For testing, make sure to enable Scoped Storage for your app using the adb command. If you discover that your app has a use-case that's not supported by Scoped Storage, please let us know by taking this short survey. We appreciate the great feedback you've given us already, it's a big help as we move forward with the development of this feature.

Bubbles: a new way to multitask

In Android Q we're adding platform support for bubbles, a new way for users to multitask and re-engage with your apps. Various apps have already built similar interactions from the ground up, and we're excited to bring the best from those into the platform, while helping to make interactions consistent, safeguard user privacy, reduce development time, and drive innovation.

Bubbles will let users multitask as they move between activities.

Bubbles help users prioritize information and take action deep within another app, while maintaining their current context. They also let users carry an app's functionality around with them as they move between activities on their device.

Bubbles are great for messaging because they let users keep important conversations within easy reach. They also provide a convenient view over ongoing tasks and updates, like phone calls or arrival times. They can provide quick access to portable UI like notes or translations, and can be visual reminders of tasks too.

We've built bubbles on top of Android's notification system to provide a familiar and easy to use API for developers. To send a bubble through a notification you need to add a BubbleMetadata by calling setBubbleMetadata. Within the metadata you can provide the Activity to display as content within the bubble, along with an icon (disabled in beta 2) and associated person.
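
As an illustration, a minimal Kotlin sketch of attaching bubble metadata to a notification could look like this. The channel id, BubbleActivity, and ic_chat drawable are placeholder names, and the BubbleMetadata builder reflects the Android Q preview API, which may still change before the final release:

import android.app.Notification
import android.app.PendingIntent
import android.content.Context
import android.content.Intent
import android.graphics.drawable.Icon

fun buildBubbleNotification(context: Context, channelId: String): Notification {
    // The Activity shown as the bubble's expanded content (placeholder class).
    val target = Intent(context, BubbleActivity::class.java)
    val bubbleIntent = PendingIntent.getActivity(context, 0, target, 0)

    val bubbleMetadata = Notification.BubbleMetadata.Builder()
        .setIntent(bubbleIntent)                                        // content shown inside the bubble
        .setIcon(Icon.createWithResource(context, R.drawable.ic_chat))  // icon (disabled in Beta 2)
        .setDesiredHeight(600)
        .build()

    return Notification.Builder(context, channelId)
        .setContentTitle("New message")
        .setSmallIcon(R.drawable.ic_chat)
        .setBubbleMetadata(bubbleMetadata)
        .build()
}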

We're just getting started with bubbles, but please give them a try and let us know what you think. You can find a sample implementation here.

Foldables emulator

As the ecosystem moves quickly toward foldable devices, new use-cases are opening up for your apps to take advantage of these new screens. With Beta 2, you can build for foldable devices through Android Q's enhanced platform support, combined with a new foldable device emulator, available as an Android Virtual Device (AVD) in Android Studio 3.5, currently in the canary release channel.

7.3" Foldable AVD switches between the folded and unfolded states

On the platform side, we've made a number of improvements in onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on foldable and large screens. You can read more in the foldables developer guide.

To set up a runtime environment for your app, you can now configure a foldable emulator as a virtual device (AVD) in Android Studio. The foldable AVD is a reference device that lets you test with standard hardware configurations, behaviors, and states, as will be used by our device manufacturer partners. To ensure compatibility, the AVD meets CTS/GTS requirements and models CDD compliance. It supports runtime configuration change, multi-resume and the new resizeableActivity behaviors.

With Beta 2, use the canary release of Android Studio 3.5 to create a foldable virtual device supporting either of two hardware configurations: 7.3" (4.6" folded) or 8" (6.6" folded). In each configuration, the emulator gives you on-screen controls to trigger fold/unfold, change orientation, and quick actions.

Android Studio - AVD Manager: Foldable Device Setup

Try your app on the foldable emulator today by downloading the canary release of Android Studio 3.5 and setting up a foldable AVD that uses the Android Q Beta 2 system image.

Improved sharesheet

Following on the initial Sharing Shortcuts APIs in Beta 1, you can now offer a preview of the content being shared by providing an EXTRA_TITLE extra in the Intent for the title, or by setting the Intent's ClipData for a thumbnail image. See the updated sample application for the implementation details.
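
For example, a share Intent with a title and thumbnail preview might be constructed like this in Kotlin (a minimal sketch; the content URI, MIME type, and title are placeholders for whatever your app actually shares):

import android.content.ClipData
import android.content.Intent

// Inside an Activity; imageUri is assumed to be a content:// URI exposed via your FileProvider.
val shareIntent = Intent(Intent.ACTION_SEND).apply {
    type = "image/jpeg"
    putExtra(Intent.EXTRA_STREAM, imageUri)
    putExtra(Intent.EXTRA_TITLE, "Sunset over the bay")   // title shown in the sharesheet preview
    clipData = ClipData.newRawUri(null, imageUri)         // thumbnail for the preview
    addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
}
startActivity(Intent.createChooser(shareIntent, null))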

Directional, zoomable microphones

Android Q Beta 2 gives apps more control over audio capture through a new MicrophoneDirection API. You can use the API to specify a preferred direction of the microphone when taking an audio recording. For example, when the user is taking a "selfie" video, you can request the front-facing microphone for audio recording (if it exists) by calling setMicrophoneDirection(MIC_DIRECTION_FRONT).

Additionally, this API introduces a standardized way of controlling zoomable microphones, allowing your app to have control over the recording field dimension using setMicrophoneFieldDimension(float).

Compatibility through public APIs

In Android Q we're continuing our long-term effort to move apps toward only using public APIs. We introduced most of the new restrictions in Beta 1, and we're making a few minor updates to those lists in Beta 2 to minimize impact on apps. Our goal is to provide public alternative APIs for valid use-cases before restricting access, so if an interface that you currently use in Android 9 Pie is now restricted, you should request a new public API for that interface.

Get started with Android Q Beta

Today's update includes Beta 2 system images for all Pixel devices and the Android Emulator, as well as an updated SDK and tools for developers. These give you everything you need to get started testing your apps on the new platform and build with the latest APIs.

First, make your app compatible and give your users a seamless transition to Android Q, including your users currently participating in the Android Beta program. To get started, just install your current app from Google Play onto a device or emulator running Beta 2 and work through the user flows. The app should run and look great, and handle the Android Q behavior changes for all apps properly. If you find issues, we recommend fixing them in the current app, without changing your targeting level. See the migration guide for steps and a recommended timeline.

With important privacy features that are likely to affect your apps, we recommend getting started with testing right away. In particular, you'll want to test against scoped storage, new location permissions, restrictions on background Activity starts, and restrictions on device identifiers. See the privacy checklist as a starting point.

Next, update your app's targetSdkVersion to 'Q' as soon as possible. This lets you test your app with all of the privacy and security features in Android Q, as well as any other behavior changes for apps targeting Q.

Explore the new features and APIs

When you're ready, dive into Android Q and learn about the new features and APIs you can use in your apps. Take a look at the API diff report for an overview of what's changed in Beta 2, and see the Android Q Beta API reference for details. Visit the Android Q Beta developer site for more resources, including release notes and how to report issues.

To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.

How do I get Beta 2?

It's easy - you can enroll here to get Android Q Beta updates over-the-air, on any Pixel device (and this year we're supporting all three generations of Pixel -- Pixel 3, Pixel 2, and even the original Pixel!). If you're already enrolled, you'll receive the update to Beta 2 soon, no action is needed on your part. Downloadable system images are also available. If you don't have a Pixel device, you can use the Android Emulator -- just download the latest emulator system images via the SDK Manager in Android Studio.

As always, your input is critical, so please let us know what you think. You can use our hotlists for filing platform issues (including privacy and behavior changes), app compatibility issues, and third-party SDK issues. You've shared great feedback with us so far and we're working to integrate as much of it as possible in the next Beta release.

April 3rd 2019, 1:16 pm

Improving app performance with ART optimizing profiles in the cloud

Android Developers Blog

Posted by Calin Juravle, Software Engineer

In Android Pie we launched ART optimizing profiles in Play Cloud, a new optimization feature that greatly improves the application startup time after a new install or update. On average, we have observed that apps start 15% faster (cold startup) across a variety of devices. Some hero cases even show 30%+ faster startup times. One of the most important aspects is that users get this for free, without any effort from their side or from developers!

Source: Google internal data

ART optimizing profiles in Play Cloud

The feature builds on previous Profile Guided Optimization (PGO) work, which was introduced in Android 7.0 Nougat. PGO allows the Android Runtime to help improve an app's performance by building a profile of the app's most important hot code and focusing its optimization effort on it. This leads to big improvements while reducing the traditional memory and storage impact of a fully compiled app. However, it relies on the device to optimize apps based on these code profiles in idle maintenance mode, which means it could be a few days before a user sees the benefits - something we aimed to improve.

Source: Google internal data

ART optimizing profiles in Play Cloud leverages the power of Google Play to bring all PGO benefits at install/update time: most users can get great performance without waiting!

The idea relies on two key observations:

  1. Apps usually have many commonly used code paths (hot code) shared across a multitude of users and devices, e.g. classes used during startup or critical user paths. This can often be discovered by aggregating a few hundred data points.
  2. App developers often roll out their apps incrementally, starting with alpha/beta channels before expanding to a wider audience. Even if there isn't an alpha/beta set, there is often a ramp-up of users to a new version of an app.

This means we can use the initial rollout of an app to bootstrap the performance for the rest of users. ART analyzes what part of the application code is worth optimizing on the initial devices, and then uploads the data to Play Cloud, which will build a core-aggregated code profile (containing information relevant to all devices). Once there is enough information, the code profile gets published and installed alongside the app's APKs.

On a device, the code profile acts as a seed, enabling efficient profile-guided optimization at install time. These optimizations help improve cold startup time and steady state performance, all without an app developer needing to write a single line of code.

Step 1: Building the code profile

One of the main goals is to build a quality, stable code profile out of aggregated & anonymized data as fast as possible (to maximize the number of users that can benefit), while also making sure we have enough data to accurately optimize an app's performance. Sampling too much data takes up more bandwidth and time at installation. Sampling too little data means the code profile won't have enough information on what to properly optimize in order to make a difference.

The outcome of the aggregation is what we call a core code profile, which only contains anonymous data about the code that is frequently seen across a random sample of sessions per device. We remove outliers to ensure we focus on the code that matters for most users.

Experiments show that the most commonly used code paths can be calculated very quickly, over a small amount of time. That means we are able to build a code profile quickly enough for the majority of users to benefit from it.

*Data averaged from Google apps, Source: Google internal data

Step 2: Installing the code profile

In Android 9.0 Pie, we introduced a new type of installation artifact: dex metadata files. Similar to the APKs, the dex metadata files are regular archives that contain data about how the APK should be optimized - like the core code profiles that have been built in the cloud. A key difference is that the dex metadata are managed solely by the platform and the app stores, and are not directly visible to developers.

There is also built-in support for App Bundles / Google Play Dynamic Delivery: without any developer intervention, all the app's feature splits are optimized.

Step 3: Using the code profiles to optimize performance

To understand how these code profiles achieve better performance, we need to look at their structure. Code profiles contain information about:

Using this information, we use a variety of optimization techniques, out of which the following three provide most of the benefits:

Improvements & Observations

We rolled out profiles in the cloud to all apps on the Play Store at the end of last year.

A very interesting observation is that, on average, ART profiles about 20% of the application methods (even less if we count the actual size of the code). For some apps, the profile covers only 2% of the code while for some the number goes up to 60%.

Source: Google internal data

Why is this an important observation? It means that the runtime has not seen a lot of the application code, and is thus not investing in the code's optimization. While there are a lot of valid use-cases where the code will not be executed (e.g. error handling or backwards compatibility code), this may also be due to unused features or unnecessary code. The skew distribution is a strong signal that the latter could play an important role in further optimizations (e.g. lowering APK size by removing unneeded dex bytecode).

Future Development

We're excited about the improvements that ART optimizing profiles has shown, and we'll be growing this concept more in the future. Building a profile of code per app opens opportunities for even more application improvements. Data can be used by developers to improve the app based on what's relevant and important for their end users. Using the information collected in profiles, code can be re-organized or trimmed for better efficiency. Developers can potentially use App Bundles to split their features based on their use and avoid shipping unnecessary code to their users. We've already seen great improvements in app startup time, and hope to see additional benefits coming from profiles to make developers' lives easier while providing better experiences for our users.

April 1st 2019, 1:27 pm

AOSP Application Updates

Android Developers Blog

Posted by Raman Tenneti, AOSP Software Engineer and Ally Sillins, AOSP Program Manager

When we started the Android Open Source Project (AOSP) 10 years ago, we included some basic applications in the AOSP build for three main purposes:

  1. to provide a usable set of applications for someone building an Android device from our AOSP
  2. to serve as a demonstration for the nascent Android app developer community, showcasing how they should build some of these applications
  3. to, as part of the platform, provide functionality to other Android applications that would interact with them through the standard Android APIs like the common intents

However, as the Android ecosystem has matured over time, we've noticed a healthy growth of alternative applications - both as open source and proprietary implementations - developed by the developer community. These alternative applications are not only capable of serving the first two purposes, but oftentimes showcase a richer set of features demonstrating the power of Android. Late last year, we began to clean up these applications in AOSP to focus more effectively on the last purpose: their role to provide functionality to other Android applications as part of the platform.

To date, the following 3 apps have been cleaned up: Music, Calendar, and Calculator. See below for details on these updates. Going forward, you can expect to see similar efforts with the other applications in the AOSP repository.

As always, we're excited to hear your feedback on the developer website or through our AOSP forum.

Music Application

AOSP's Music app can now play back music, one file at a time, and exposes itself as an intent handler for the android.media.browse.MediaBrowserService. The app has controls to play and pause, and a slider for seeking forward and backward. Features removed include: Music Icon, Artists, Albums, Songs, Playlists, Search, and Settings.

Calendar Application

AOSP's Calendar app now exposes itself as an intent handler for the calendar events. New events cannot be created and existing events cannot be edited or deleted. The following features have been deleted: support for multiple accounts, reminders and settings. In addition, some features remain that are not needed for providing a part of the platform functionality: views for day, week, and month. This app may be further simplified in the future.

Calculator App

The calculator application is a standalone app, and does not function as part of the platform and hence has been removed from the AOSP build. However, the application will continue to exist as an open source project separately.

March 28th 2019, 2:06 pm

Changes to the Google Play Developer API

Android Developers Blog

Posted by Vlad Radu, Product Manager and Nicholas Lativy, Software Engineer

The Google Play Developer API allows you to automate your in-app billing and app distribution workflows. At Google I/O '18, we introduced version 3 of the API, which allows you to transactionally start, manage, and halt staged releases on all tracks, through production, open testing, closed testing (including the new additional testing tracks), and internal testing.

Updating from versions 1 and 2 to the latest version 3

In addition to these new features, version 3 also supports all the functionality of previous versions, improving and simplifying how you manage workflows. Starting December 1, 2019, versions 1 and 2 of the Google Play Developer API will no longer be available so you need to update to version 3 ahead of this date.

Migrating to version 3

If you use the Google Play API client libraries (available for Java, Python, and other popular languages), we recommend upgrading to their latest versions, which already support version 3 of the API. In many cases, changing the version of the client library should be all that is necessary. However, you may also need to update specific code references to the version of the API in use - see examples in our samples repository.
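
As a rough illustration (not the official migration steps), constructing a version 3 client with the Java client library from Kotlin could look like this; the service-account key path and application name are placeholders, and the exact builder classes depend on the client-library version you pull in:

import com.google.api.client.googleapis.auth.oauth2.GoogleCredential
import com.google.api.client.http.javanet.NetHttpTransport
import com.google.api.client.json.jackson2.JacksonFactory
import com.google.api.services.androidpublisher.AndroidPublisher
import java.io.FileInputStream

fun createPublisher(): AndroidPublisher {
    // Path to a service-account key is a placeholder for this sketch.
    val credential = GoogleCredential
        .fromStream(FileInputStream("service-account.json"))
        .createScoped(listOf("https://www.googleapis.com/auth/androidpublisher"))

    // With the v3-based client library, the same package and classes are used;
    // updating the library dependency is what switches the underlying API version.
    return AndroidPublisher.Builder(NetHttpTransport(), JacksonFactory.getDefaultInstance(), credential)
        .setApplicationName("my-publishing-tool")
        .build()
}

From there, edits, track, and release calls follow the version 3 resource model described in the documentation.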

Many third-party plugins are already using version 3 of the API. If you use a plugin that does not support version 3 you will need to contact the maintainer. You will start seeing warnings in the Google Play Console in mid-May if we detect that your app is still using version 1 and version 2 endpoints.

For version 1 users

If you currently use version 1 of the API, you may also need to link your API project to the Google Play Console before converting to version 3. Learn more about this process.

Going forward

We hope you benefit from the new features of the Google Play Developer Publishing API and are looking forward to your continued feedback to help us improve the publishing experience on Google Play.

How useful did you find this blog post?

March 26th 2019, 7:36 pm

The latest Android App Bundle updates including the additional languages API

Android Developers Blog

Posted by Wojtek Kaliciński, Developer Advocate, Android

Last year, we launched Android App Bundles and Google Play's Dynamic Delivery to introduce modular development, reduce app size and streamline the release process. Since then, we've seen developers quickly adopt this new app model in over 60,000 production apps. We've been excited to see developers experience significant app size savings and reductions in the time needed to manage each release, and have documented these benefits in case studies with Duolingo and redBus.

Thank you to everyone who took the time to give us feedback on our initial launch. We're always open to new ideas, and today, we're happy to announce some new improvements based on your suggestions:

Additional languages API

When you adopt the Android App Bundle as the publishing format for your app, Google Play is able to optimize the installation by delivering only the language resources that match the device's system locales. If a user changes the system locale after the app is installed, Play automatically downloads the required resources.

Some developers choose to decouple the app's display language from the system locale by adding an in-app language switcher. With the latest release of the Play Core library (version 1.4.0), we're introducing a new additional languages API that makes it possible to build in-app language pickers while retaining the full benefits of smaller installs provided by using app bundles.

With the additional languages API, apps can now request the Play Store to install resources for a new language configuration on demand and immediately start using it.

Get a list of installed languages

The app can get a list of languages that are already installed using the SplitInstallManager#getInstalledLanguages() method.

val splitInstallManager = SplitInstallManagerFactory.create(context)
val langs: Set<String> = splitInstallManager.installedLanguages

Requesting additional languages

Requesting an additional language is similar to requesting an on demand module. You can do this by specifying a language in the request through SplitInstallRequest.Builder#addLanguage(java.util.Locale).

val installRequestBuilder = SplitInstallRequest.newBuilder()
installRequestBuilder.addLanguage(Locale.forLanguageTag("pl"))
splitInstallManager.startInstall(installRequestBuilder.build())

The app can also monitor install success with callbacks and monitor the download state with a listener, just like when requesting an on demand module.

Remember to handle the SplitInstallSessionStatus.REQUIRES_USER_CONFIRMATION state. Please note that there was an API change in a recent Play Core release, which means you should use the new SplitInstallManager#startConfirmationDialogForResult() together with Activity#onActivityResult(). The previous method of using SplitInstallSessionState#resolutionIntent() with startIntentSender() has been deprecated.
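
For example, a state listener that tracks the download and handles the confirmation state could look roughly like this in Kotlin. This is a sketch: showProgress, onLanguageInstalled, activity, and CONFIRMATION_REQUEST_CODE are placeholders for your own code:

import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallStateUpdatedListener
import com.google.android.play.core.splitinstall.model.SplitInstallSessionStatus

val splitInstallManager = SplitInstallManagerFactory.create(context)

val listener = SplitInstallStateUpdatedListener { state ->
    when (state.status()) {
        SplitInstallSessionStatus.DOWNLOADING ->
            showProgress(state.bytesDownloaded(), state.totalBytesToDownload())
        SplitInstallSessionStatus.REQUIRES_USER_CONFIRMATION ->
            // The result of the confirmation dialog arrives in onActivityResult().
            splitInstallManager.startConfirmationDialogForResult(state, activity, CONFIRMATION_REQUEST_CODE)
        SplitInstallSessionStatus.INSTALLED ->
            onLanguageInstalled()
        else -> { /* other states: PENDING, INSTALLING, FAILED, CANCELED, ... */ }
    }
}

splitInstallManager.registerListener(listener)
// Remember to call splitInstallManager.unregisterListener(listener) when you no longer need updates.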

Check out the updated Play Core Library documentation for more information on how to access the newly installed language resources in your activity.

We've also updated our dynamic features sample on GitHub with the additional languages API, including how to store the user's language preference and apply it to your activities at startup.

Please note that while the additional languages API is now available to all developers, on demand modules are in a closed beta for the time being. You can experiment with on demand modules in your internal, open, and closed test tracks, while we work with our partners to make sure this feature is ready for production apps.

Instant-enabled App Bundle

In Android Studio 3.3, we introduced a way to build app bundles that contain both the regular, installed version of your app as well as a Google Play Instant experience for modules marked with the dist:instant="true" attribute in their AndroidManifest.xml:

<manifest ... xmlns:dist="http://schemas.android.com/apk/distribution">
    <dist:module dist:instant="true" />
    ...
</manifest>

Even though you could use a single project to generate the installed and instant versions of your app, up until now, developers were still required to use product flavors in order to build two separate app bundles and upload both to Play.

We're happy to announce that we have now removed this restriction. It's now possible to upload a single, unified app bundle artifact, containing modules enabled for the instant experience. This functionality is now available for everyone.

After you build an instant-enabled app bundle, upload it to any track on the Play Console, and you'll be able to select it when creating a new instant app release. This also means that the installed and instant versions of your app no longer need different version codes, which will simplify the release workflow.

Opt in to app signing by Google Play

You need to enable app signing by Google Play to publish your app using an Android App Bundle and automatically benefit from Dynamic Delivery optimizations. It is also a more secure way to manage your signing key, which we recommend to everyone, even if you want to keep publishing regular APKs for now.

Based on your feedback, we've revamped the sign-up flow for new apps to make it easier to initialize the key you want to use for signing your app.

Now developers can explicitly choose to upload their existing key without needing to upload a self-signed artifact first. You can also choose to start with a key generated by Google Play, so that the key used to locally sign your app bundle can become your upload key.

Read more about the new flow.

Permanent uninstallation of install time modules

We have now added the ability to permanently uninstall dynamic feature modules that are included in your app's initial install.


This is a behavior change, which means you can now call the existing SplitInstallManager#deferredUninstall() API on modules that set onDemand="false". The module will be permanently uninstalled, even when the app is updated.

This opens up new possibilities for developers to further reduce the installed app size. For example, you can now uninstall a heavy sign-up module or any other onboarding content once the user completes it. If the user navigates to a section of your app that has been uninstalled, you can reinstall it using the standard on demand modules install API.
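
For example, once a one-time onboarding flow is complete, the module that hosts it could be scheduled for permanent removal. A minimal sketch follows; the module name "onboarding" is a placeholder:

import android.util.Log
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory

val splitInstallManager = SplitInstallManagerFactory.create(context)

// Works for install-time modules (onDemand="false") as well as on demand modules;
// the module stays uninstalled even across app updates.
splitInstallManager.deferredUninstall(listOf("onboarding"))
    .addOnSuccessListener { /* uninstall has been scheduled */ }
    .addOnFailureListener { e -> Log.w("Splits", "Deferred uninstall failed", e) }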

We hope you enjoy these improvements and test them out in your apps. Continue to share your feedback as we work to make these features even more useful for you!

How useful did you find this blog post?

March 21st 2019, 6:13 pm

Google Mobile Developer Day at Game Developers Conference 2019

Android Developers Blog

Posted by Kacey Fahey, Developer Marketing, Google Play & Android

We're excited to host the Google Mobile Developer Day at Game Developers Conference 2019. We are taking this opportunity to share best practices and our plans to help your games businesses, which are fuelling incredible growth in the global mobile games market. According to Newzoo, mobile games revenue is projected to account for nearly 60% of global games revenue by 2021. The drivers of this growth come in many forms, including more developers building great games, new game styles blurring the lines of traditional genres, and the explosion of gaming in emerging markets - most notably in India.

Image Source: GamesIndustry.biz

To support your growth, Google is focused on improving the game development experience on Android. We are investing in tools to give you better insights into what is happening on devices, as well as in people and teams to address your feedback about the development process, graphics, multiplayer experiences, and more.

We have some great updates and new tools to improve game discovery and monetization on Google Play, which we also shared today during our Mobile Developer Day:

Pre-registration now in general availability

Starting today, we are launching pre-registration for general availability. Set up a pre-registration campaign in the Google Play Console and start marketing your games to build awareness before launch. Users who pre-register receive a notification at launch, which helps increase day one installs.

Google Play Instant gaining adoption

We have seen strong adoption of Google Play Instant with 3x growth in the number of instant games and 5x growth in the number of instant sessions over the last six months. Instant experiences allow players to tap the 'Try Now' button on your store listing page and go straight to a demo experience in a matter of seconds, without installing. Now, they're even easier to build with Cocos and Unity plug-ins and an expanded implementation partner program. Discover the latest updates on Google Play Instant.

Android App Bundles momentum and new large download size threshold

Over 60K apps and games on Google Play are now using the Android App Bundle publishing format, which is supported in Android Studio, Unity, and Cocos Creator. The app bundle uses Google Play's Dynamic Delivery to deliver a smaller, optimized APK containing only the resources needed for a specific device.

To better support high quality game experiences and reflect improved devices, we've also increased the size limit for APKs generated from app bundles to 150MB and raised the threshold for large download user warnings on the Google Play Store to 150MB, from 100MB.

Improved tools in the Google Play Console

Store listing experiments let you A/B test changes to your store listing on actual Play Store visitors. We recently rolled out improvements, introducing two new metrics - first time installers and D1 retained users - to more accurately reflect the performance of your store listings. These two new metrics are now reported with hourly intervals and are available via email notifications, letting you see results faster and track performance better.

Country targeted store listings allow you to tailor your app's store listing to appeal to users in different countries. You can customize the app title, icon, descriptions and graphic assets, allowing you to better appeal to users in specific target markets. For example, you can now tailor your store listing with different versions of the English language for users in India versus the United States.

Rewarded ads give players the choice to watch an advertisement in exchange for in-app items. With rewarded ads in Google Play, you can now create and manage rewarded ads through the Google Play Console. No additional SDK integrations are required.

We hope you try some of these new tools and keep sharing ideas so we can make Android and Google Play a better place to grow your business. We are committed to continue improving the platform and building tools that better serve the gaming community.

Get started today by visiting two new resources: a hub for developers interested in creating games on Android, and games.withgoogle.com for developers looking to connect and scale their business across Google. Many of these updates and resources come from community suggestions, so sign up for our monthly newsletter to stay informed.

How useful did you find this blog post?

March 18th 2019, 1:36 pm

Introducing a new Google Play app and game icon specification

Android Developers Blog


Posted by Steve Suppe, Product Manager, Google Play

As part of our focus and dedication to improving the Google Play Store experience for our users, we are introducing new design specifications for your app icons.

Left to right: original icon, new icon (example), original icon in legacy mode


As of early April, you will be able to upload new icons to the Google Play Console and confirm you are compliant with the new specification. Original icons are still accepted in the Google Play Store during this time. As of May 1st, developers will no longer be able to upload icons in the Play Console that do not meet the new specifications, although existing original icons in the Google Play Store during this period can remain unchanged.

By June 24, we require you to:

  1. Update your icon to the new specification.
  2. Upload your icon to Play Console.
  3. Confirm in Play Console that your icon meets the new specification.

We highly recommend that you update your icons and confirm they meet the new specification as soon as possible to ensure that you provide the highest quality experience for users.

What exactly is changing?

Timeline and changes:

Early April: You can start uploading your new icons in Play Console and confirm they meet the new specification.
  • Original icons will continue to display correctly in Google Play.
  • New icons will display correctly in Google Play.

May 1st: Any new icons uploaded in Play Console must be confirmed as meeting the new specification.
  • Original icons will continue to display correctly in Google Play.
  • New icons will display correctly in Google Play.

June 24th: Original icons are converted to "legacy mode." You must confirm that any new icons uploaded in Play Console meet the new specification.
  • Original icons will be automatically converted to "legacy mode" icons.
  • New icons render correctly in the Google Play Store.

These updates will help us all provide a more unified and consistent look and feel for Google Play, allowing us to better showcase your apps and games and provide a higher quality user experience.

We will be keeping you up to date with these changes in the coming months, so look out for more updates. In the meantime, check out our new icon design specifications.

How useful did you find this blog post?


March 15th 2019, 1:15 pm

Giving users more control over their location data

Android Developers Blog

Posted by Jen Chai, Product Manager

Location data can deliver amazing, rich mobile experiences for users on Android such as finding a restaurant nearby, tracking the distance of a run, and getting turn-by-turn directions as you drive. Location is also one of the most sensitive types of personal information for a user. We want to give users simple, easy-to-understand controls for what data they are providing to apps, and yesterday, we announced in Android Q that we are giving users more control over location permissions. We are delighted by the innovative location experiences you provide to users through your apps, and we want to make this transition as straightforward for you as possible. This post dives deeper into the location permission changes in Q, what it may mean for your app, and how to get started with any updates needed.

Previously, a user had a single control to allow or deny an app access to device location, which covered location usage by the app both while it was in use and while it wasn't. Starting in Android Q, users have a new option to give an app access to location only when the app is being used; in other words, when the app is in the foreground. This means users will have a choice of three options for providing location to an app:

Some apps or features within an app may only need location while the app is being used. For example, if a feature allows a user to search for a restaurant nearby, the app only needs to understand the user's location when the user opens the app to search for a restaurant.

However, some apps may need location even when the app is not in use. For example, an app might automatically track the mileage you drive for tax filing, without requiring you to interact with the app.

The new location control allows users to decide when device location data is provided to an app and prevents an app from getting location data that it may not need. Users will see this new option in the same permissions dialog that is presented today when an app requests access to location. This permission can also be changed at any time for any app from Settings > Location > App permission.

Here's how to get started

We know these updates may impact your apps. We respect our developer community, and our goal is to approach any change like this very carefully. We want to support you as much as we can by (1) releasing developer-impacting features in the first Q Beta to give you as much time as possible to make any updates needed in your apps and (2) providing detailed information in follow-up posts like this one as well as in the developer guides and privacy checklist. Please let us know if there are ways we can make the guides more helpful!

If your app has a feature requiring "all the time" permission, you'll need to add the new ACCESS_BACKGROUND_LOCATION permission to your manifest file when you target Android Q. If your app targets Android 9 (API level 28) or lower, the ACCESS_BACKGROUND_LOCATION permission will be automatically added for you by the system if you request either ACCESS_FINE_LOCATION or ACCESS_COARSE_LOCATION. A user can decide to provide or remove these location permissions at any time through Settings. To maintain a good user experience, design your app to gracefully handle when your app doesn't have background location permission or when it doesn't have any access to location.
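
As a rough sketch of what that runtime request might look like when targeting Q (the request code and the decision about which level of access to ask for are up to your app):

import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

const val RC_LOCATION = 42  // arbitrary request code for this sketch

fun requestLocationForMileageTracking(activity: Activity) {
    val needsBackground = ContextCompat.checkSelfPermission(
        activity, Manifest.permission.ACCESS_BACKGROUND_LOCATION
    ) != PackageManager.PERMISSION_GRANTED

    if (needsBackground) {
        // ACCESS_BACKGROUND_LOCATION must also be declared in the manifest when targeting Q.
        ActivityCompat.requestPermissions(
            activity,
            arrayOf(
                Manifest.permission.ACCESS_FINE_LOCATION,
                Manifest.permission.ACCESS_BACKGROUND_LOCATION
            ),
            RC_LOCATION
        )
    }
}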

Users will also be more likely to grant the location permission if they clearly understand why your app needs it. Consider asking for the location permission from users in context, when the user is turning on or interacting with a feature that requires it, such as when they are searching for something nearby. In addition, only ask for the level of access required for that feature. In other words, don't ask for "all the time" permission if the feature only requires "while in use" permission.

To learn more, read the developer guide on how to handle the new location controls.

March 14th 2019, 3:13 pm

Android Jetpack Navigation Stable Release

Android Developers Blog

Posted by Ian Lake, Software Engineering Lead & Jisha Abubaker, Product Manager

Cohesive tooling and guidance for implementing predictable in-app navigation

Today we're happy to announce the stable release of the Android Jetpack Navigation component.

The Jetpack Navigation component's suite of libraries, tooling and guidance provides a robust, complete navigation framework, freeing you from the challenges of implementing navigation yourself and giving you certainty that all edge cases are handled correctly.

With the Jetpack Navigation component you can:

The Jetpack Navigation component adheres to the Principles of Navigation, providing consistent and predictable navigation no matter how simple or complex your app may be.

Simplify navigation code with Jetpack Navigation Libraries

The Jetpack Navigation component provides a framework for in-app navigation that makes it possible to abstract away the implementation details, keeping your app code free of navigation boilerplate.

To get started with the Jetpack Navigation component in your project, add the Navigation artifacts available on Google's Maven repository in Java or Kotlin to your app's build.gradle file:

 dependencies {
    def nav_version = "2.0.0"

    // Java
    implementation "androidx.navigation:navigation-fragment:$nav_version"
    implementation "androidx.navigation:navigation-ui:$nav_version"

    // Kotlin KTX 
    implementation "androidx.navigation:navigation-fragment-ktx:$nav_version"
    implementation "androidx.navigation:navigation-ui-ktx:$nav_version"
  }

Note: If you have not yet migrated to androidx.*, the Jetpack Navigation stable component libraries are also available as android.arch.* artifacts in version 1.0.0.

navigation-runtime : This core library powers the navigation graph, which provides the structure of your in-app navigation: the screens or destinations that make up your app and the actions that link them. You can control how you navigate to destinations with a simple navigate() call. These destinations may be fragments, activities or custom destinations.

navigation-fragment: This library builds upon navigation-runtime and provides out-of-the-box support for fragments as destinations. With this library, fragment transactions are now handled for you automatically.

navigation-ui: This library allows you to easily add navigation drawers, menus and bottom navigation to your app consistent with the Material Design guidelines.

Each of these libraries provides an Android KTX artifact with the -ktx suffix that builds upon the Java API, taking advantage of Kotlin-specific language features.
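
For example, with the -ktx artifacts a fragment can trigger navigation with a single call. This is a minimal sketch; the action id R.id.action_home_to_details comes from a hypothetical navigation graph:

import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class HomeFragment : Fragment() {

    private fun openDetails() {
        // Navigates along the action defined in the navigation graph;
        // the back stack and fragment transactions are handled for you.
        findNavController().navigate(R.id.action_home_to_details)
    }
}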

Tools to help you build predictable navigation workflows

Available in Android Studio 3.3 and above, the Navigation Editor lets you visually create your navigation graph, allowing you to manage user journeys within your app.

With integration into the manifest merger tool, Android Studio can automatically generate the intent filters necessary to enable deep linking to a specific screen in your app. With this feature, you can associate URLs with any screen of your app by simply setting an attribute on the navigation destination.

Navigation often requires passing data from one screen to another. For example, your list screen may pass an item ID to a details screen. Many of the runtime exceptions during navigation have been attributed to a lack of type safety guarantees as you pass arguments. These exceptions are hard to replicate and debug. Learn how you can provide compile time type safety with the Safe Args Gradle Plugin.
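
With the Safe Args Gradle plugin applied, the generated Directions classes make that argument passing type safe. The sketch below assumes the HomeFragment from the earlier example has an action_home_to_details action that takes an itemId argument; the fragment, action, and argument names are all hypothetical, and HomeFragmentDirections is generated by Safe Args:

import androidx.navigation.fragment.findNavController

// Inside the same HomeFragment as in the sketch above:
private fun openDetails(itemId: Long) {
    // actionHomeToDetails() is generated from the navigation graph;
    // the itemId argument is checked at compile time.
    val action = HomeFragmentDirections.actionHomeToDetails(itemId)
    findNavController().navigate(action)
}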

Guidance to get it right on the first try

Check out our brand new set of developer guides that encompass best practices to help you implement navigation correctly:

What developers say

Here's what Emery Coxe, Android Lead @ HomeAway, has to say about the Jetpack Navigation component:

"The Navigation library is well-designed and fully configurable, allowing us to integrate the library according to our specific needs.

With the Navigation Library, we refactored our legacy navigation drawer to support a dynamic, runtime-based configuration using custom views. It allowed us to add/remove new screens in the top-level experience of our app without creating any interdependencies between discretely packaged modules.

We were also able to get rid of all anti-patterns in our app around top-level navigation, removing explicit casts and hardcoded assumptions to instead rely directly on Navigation. This library is a fundamental component of modern Android development, and we intend to adopt it more broadly across our app moving forward."

Get started

Check out the migration guide and the developer guide to learn how you can get started using the Jetpack Navigation component in your app. We also offer a hands-on codelab and a sample app.

Also check out Google's Digital Wellbeing to see another real-world example of in-app navigation using the Android Jetpack Navigation component.

Feedback

Please continue to tell us about your experience with the Navigation component. If you have specific feedback on features or if you run into any issues, please file a bug via one of the following links:

March 14th 2019, 1:13 pm

Introducing Android Q Beta

Android Developers Blog

Posted by Dave Burke, VP of Engineering

In 2019, mobile innovation is stronger than ever, with new technologies from 5G to edge-to-edge displays and even foldable screens. Android is right at the center of this innovation cycle, and thanks to the broad ecosystem of partners across billions of devices, Android is helping push the boundaries of hardware and software, bringing new experiences and capabilities to users.

As the mobile ecosystem evolves, Android is focused on helping users take advantage of the latest innovations, while making sure users' security and privacy are always a top priority. Building on top of efforts like Google Play Protect and runtime permissions, Android Q brings a number of additional privacy and security features for users, as well as enhancements for foldables, new APIs for connectivity, new media codecs and camera capabilities, NNAPI extensions, Vulkan 1.1 support, faster app startup, and more.

Today we're releasing Beta 1 of Android Q for early adopters and a preview SDK for developers. You can get started with Beta 1 today by enrolling any Pixel device (including the original Pixel and Pixel XL, which we've extended support for by popular demand!). Please let us know what you think! Read on for a taste of what's in Android Q, and we'll see you at Google I/O in May when we'll have even more to share.

Building on top of privacy protections in Android

Android was designed with security and privacy at the center. As Android has matured, we've added a wide range of features to protect users, like file-based encryption, OS controls requiring apps to request permission before accessing sensitive resources, locking down camera/mic background access, lockdown mode, encrypted backups, Google Play Protect (which scans over 50 billion apps a day to identify potentially harmful apps and remove them), and much more. In Android Q, we've made even more enhancements to protect our users. Many of these enhancements are part of our work in Project Strobe.

Giving users more control over location

With Android Q, the OS helps users have more control over when apps can get location. As in prior versions of the OS, apps can only get location once the app has asked you for permission, and you have granted it.

One thing that's particularly sensitive is apps' access to location while the app is not in use (in the background). Android Q enables users to give apps permission to see their location never, only when the app is in use (running), or all the time (when in the background).

For example, an app asking for a user's location for food delivery makes sense and the user may want to grant it the ability to do that. But since the app may not need location outside of when it's currently in use, the user may not want to grant that access. Android Q now offers this greater level of control. Read the developer guide for details on how to adapt your app for this new control. Look for more user-centric improvements to come in upcoming Betas. At the same time, our goal is to be very sensitive to always give developers as much notice and support as possible with these changes.
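As a rough sketch of what requesting the new state can look like at runtime (the request code constant is illustrative, and background access should only be requested when your app genuinely needs it):

// ACCESS_BACKGROUND_LOCATION is the new Android Q permission for "all the time" access.
ActivityCompat.requestPermissions(
    activity,
    arrayOf(
        Manifest.permission.ACCESS_FINE_LOCATION,
        Manifest.permission.ACCESS_BACKGROUND_LOCATION
    ),
    LOCATION_REQUEST_CODE  // illustrative request code constant
)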

More privacy protections in Android Q

Beyond changes to location, we're making further updates to ensure transparency, give users control, and secure personal data.

In Android Q, the OS gives users even more control over apps' access to shared files. Users will be able to control apps' access to the Photos and Videos and the Audio collections via new runtime permissions. For Downloads, apps must use the system file picker, which allows the user to decide which Download files the app can access. For developers, there are changes to how your apps can use shared areas on external storage. Make sure to read the Scoped Storage changes for details.

We've also seen that users (and developers!) get upset when an app unexpectedly jumps into the foreground and takes over focus. To reduce these interruptions, Android Q will prevent apps from launching an Activity while in the background. If your app is in the background and needs to get the user's attention quickly -- such as for incoming calls or alarms -- you can use a high-priority notification and provide a full-screen intent. See the documentation for more information.
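A minimal sketch of that pattern, assuming a high-importance notification channel (here named "calls") has already been created and incomingCallIntent launches your own full-screen Activity:

val fullScreenPendingIntent = PendingIntent.getActivity(
    context, 0, incomingCallIntent, PendingIntent.FLAG_UPDATE_CURRENT)

val notification = NotificationCompat.Builder(context, "calls")
    .setSmallIcon(R.drawable.ic_call)              // illustrative resource
    .setContentTitle("Incoming call")
    .setPriority(NotificationCompat.PRIORITY_HIGH)
    .setCategory(NotificationCompat.CATEGORY_CALL)
    // Shown full screen when the device is locked; otherwise a heads-up notification.
    .setFullScreenIntent(fullScreenPendingIntent, true)
    .build()

NotificationManagerCompat.from(context).notify(CALL_NOTIFICATION_ID, notification)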

We're limiting access to non-resettable device identifiers, including device IMEI, serial number, and similar identifiers. Read the best practices to help you choose the right identifiers for your use case, and see the details here. We're also randomizing the device's MAC address when connected to different Wi-Fi networks by default -- a setting that was optional in Android 9 Pie.

We are bringing these changes to you early, so you can have as much time as possible to prepare. We've also worked hard to provide developers with detailed information up front; we recommend reviewing the detailed docs on the privacy changes and getting started with testing right away.

New ways to engage users

In Android Q, we're enabling new ways to bring users into your apps and streamlining the experience as they transition from other apps.

Foldables and innovative new screens

Foldable devices have opened up some innovative experiences and use cases. To help your apps take advantage of these and other large-screen devices, we've made a number of improvements in Android Q, including changes to onResume and onPause to support multi-resume and notify your app when it has focus. We've also changed how the resizeableActivity manifest attribute works, to help you manage how your app is displayed on foldable and large screens. To help you get started building and testing on these new devices, we've been hard at work updating the Android Emulator to support multiple-display type switching -- more details coming soon!

Sharing shortcuts

When a user wants to share content like a photo with someone in another app, the process should be fast. In Android Q we're making this quicker and easier with Sharing Shortcuts, which let users jump directly into another app to share content. Developers can publish share targets that launch a specific activity in their apps with content attached, and these are shown to users in the share UI. Because they're published in advance, the share UI can load instantly when launched.

The Sharing Shortcuts mechanism is similar to how App Shortcuts works, so we've expanded the ShortcutInfo API to make the integration of both features easier. This new API is also supported in the new ShareTarget AndroidX library. This allows apps to use the new functionality, while allowing pre-Q devices to work using Direct Share. You can find an early sample app with source code here.

Settings Panels

You can now also show key system settings directly in the context of your app, through a new Settings Panel API, which takes advantage of the Slices feature that we introduced in Android 9 Pie.

A settings panel is a floating UI that you invoke from your app to show system settings that users might need, such as internet connectivity, NFC, and audio volume. For example, a browser could display a panel with connectivity settings like Airplane Mode, Wi-Fi (including nearby networks), and Mobile Data. There's no need to leave the app; users can manage settings as needed from the panel. To display a settings panel, just fire an intent with one of the new Settings.Panel actions.
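For example, a browser could show the connectivity panel with a couple of lines like these (a minimal sketch; the request code constant is illustrative):

// Launches the floating internet-connectivity panel over the current app.
val panelIntent = Intent(Settings.Panel.ACTION_INTERNET_CONNECTIVITY)
startActivityForResult(panelIntent, REQUEST_CODE_SETTINGS_PANEL)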

Connectivity

In Android Q, we've extended what your apps can do with Android's connectivity stack and added new connectivity APIs.

Connectivity permissions, privacy, and security

Most of our APIs for scanning networks already require COARSE location permission, but in Android Q, for Bluetooth, Cellular and Wi-Fi, we're increasing the protection around those APIs by requiring the FINE location permission instead. If your app only needs to make peer-to-peer connections or suggest networks, check out the improved Wi-Fi APIs below -- they simplify connections and do not require location permission.

In addition to the randomized MAC addresses that Android Q provides when connected to different Wi-Fi networks, we're adding support for the new Wi-Fi standards WPA3 and OWE, to improve security for home and work networks as well as open/public networks.

Improved peer-to-peer and internet connectivity

In Android Q we refactored the Wi-Fi stack to improve privacy and performance, but also to improve common use-cases like managing IoT devices and suggesting internet connections -- without requiring the location permission.

The network connection APIs make it easier to manage IoT devices over local Wi-Fi, for peer-to-peer functions like configuring, downloading, or printing. Apps initiate connection requests indirectly by specifying preferred SSIDs and BSSIDs in a WifiNetworkSpecifier. The platform handles the Wi-Fi scanning itself and displays matching networks in a Wi-Fi Picker. When the user chooses, the platform sets up the connection automatically.
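A minimal sketch of such a peer-to-peer request (the SSID and passphrase are illustrative; error handling and callback unregistration are omitted):

val specifier = WifiNetworkSpecifier.Builder()
    .setSsid("MyIoTDevice-1234")            // illustrative device SSID
    .setWpa2Passphrase("device-setup-key")  // illustrative credential
    .build()

val request = NetworkRequest.Builder()
    .addTransportType(NetworkCapabilities.TRANSPORT_WIFI)
    .removeCapability(NetworkCapabilities.NET_CAPABILITY_INTERNET)  // local-only link
    .setNetworkSpecifier(specifier)
    .build()

val connectivityManager =
    context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager

connectivityManager.requestNetwork(request, object : ConnectivityManager.NetworkCallback() {
    override fun onAvailable(network: Network) {
        // Bind your sockets to `network` to talk to the device over the local link.
    }
})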

The network suggestion APIs let apps surface preferred Wi-Fi networks to the user for internet connectivity. Apps initiate connections indirectly by providing a ranked list of networks and credentials as WifiNetworkSuggestions. The platform will seamlessly connect based on past performance when in range of those networks.
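And a minimal sketch of suggesting a network for internet connectivity (the SSID and passphrase are again illustrative):

val suggestion = WifiNetworkSuggestion.Builder()
    .setSsid("CafeGuestWifi")              // illustrative network
    .setWpa2Passphrase("guest-password")   // illustrative credential
    .build()

val wifiManager = context.getSystemService(Context.WIFI_SERVICE) as WifiManager
val status = wifiManager.addNetworkSuggestions(listOf(suggestion))
if (status != WifiManager.STATUS_NETWORK_SUGGESTIONS_SUCCESS) {
    // Handle the error; when a suggested network is in range, the platform
    // decides if and when to connect.
}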

Wi-Fi performance mode

You can now request adaptive Wi-Fi in Android Q by enabling high performance and low latency modes. These will be of great benefit where low latency is important to the user experience, such as real-time gaming, active voice calls, and similar use-cases.

To use the new performance modes, call WifiManager.createWifiLock() with WIFI_MODE_FULL_LOW_LATENCY or WIFI_MODE_FULL_HIGH_PERF. In these modes, the platform works with the device firmware to meet the requirement with the lowest power consumption.
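A minimal sketch of holding a low-latency lock only while it is actually needed (the tag string is illustrative):

val wifiManager = context.getSystemService(Context.WIFI_SERVICE) as WifiManager
val wifiLock = wifiManager.createWifiLock(
    WifiManager.WIFI_MODE_FULL_LOW_LATENCY, "myapp:game-session")

wifiLock.acquire()
try {
    // Run the latency-sensitive work (e.g. a real-time game session) here.
} finally {
    wifiLock.release()  // always release to avoid unnecessary power drain
}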

Camera, media, graphics

Dynamic depth format for photos

Many cameras on mobile devices can simulate narrow depth of field by blurring the foreground or background relative to the subject. They capture depth metadata for various points in the image and apply a static blur to the image, after which they discard the depth metadata.

Starting in Android Q, apps can request a Dynamic Depth image, which consists of a JPEG, XMP metadata describing the depth-related elements, and a depth and confidence map embedded in the same file, on devices that advertise support.

Requesting a JPEG + Dynamic Depth image makes it possible for you to offer specialized blurs and bokeh options in your app. You can even use the data to create 3D images or support AR photography use-cases in the future. We're making Dynamic Depth an open format for the ecosystem, and we're working with our device-maker partners to make it available across devices running Android Q and later.

With Dynamic Depth image you can offer specialized blurs and bokeh options in your app.

New audio and video codecs

Android Q introduces support for the open source video codec AV1. This allows media providers to stream high quality video content to Android devices using less bandwidth. In addition, Android Q supports audio encoding using Opus - a codec optimized for speech and music streaming, and HDR10+ for high dynamic range video on devices that support it.

The MediaCodecInfo API introduces an easier way to determine the video rendering capabilities of an Android device. For any given codec, you can obtain a list of supported sizes and frame rates using MediaCodecInfo.VideoCapabilities.getSupportedPerformancePoints(). This allows you to pick the best quality video content to render on any given device.
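A minimal sketch of querying those performance points for H.264 decoders (the MIME type is just an example):

val codecList = MediaCodecList(MediaCodecList.REGULAR_CODECS)
for (info in codecList.codecInfos) {
    if (info.isEncoder || !info.supportedTypes.contains("video/avc")) continue

    val performancePoints = info
        .getCapabilitiesForType("video/avc")
        .videoCapabilities
        ?.supportedPerformancePoints  // null on devices/codecs that don't report them

    // Each PerformancePoint describes a size and frame-rate combination the
    // decoder is expected to sustain; pick your stream quality accordingly.
}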

Native MIDI API

For apps that perform their audio processing in C++, Android Q introduces a native MIDI API to communicate with MIDI devices through the NDK. This API allows MIDI data to be retrieved inside an audio callback using a non-blocking read, enabling low latency processing of MIDI messages. Give it a try with the sample app and source code here.

ANGLE on Vulkan

To enable more consistency for game and graphics developers, we are working towards a standard, updateable OpenGL driver for all devices built on Vulkan. In Android Q we're adding experimental support for ANGLE on top of Vulkan on Android devices. ANGLE is a graphics abstraction layer designed for high-performance OpenGL compatibility across implementations. Through ANGLE, the many apps and games using OpenGL ES can take advantage of the performance and stability of Vulkan and benefit from a consistent, vendor-independent implementation of ES on Android devices. In Android Q, we're planning to support OpenGL ES 2.0, with ES 3.0 next on our roadmap.

We'll expand the implementation with more OpenGL functionality, bug fixes, and performance optimizations. See the docs for details on the current ANGLE support in Android, how to use it, and our plans moving forward. You can start testing with our initial support by opting-in through developer options in Settings. Give it a try today!

Vulkan everywhere

We're continuing to expand the impact of Vulkan on Android, our implementation of the low-overhead, cross-platform API for high-performance 3D graphics. Our goal is to make Vulkan on Android a broadly supported and consistent developer API for graphics. We're working together with our device manufacturer partners to make Vulkan 1.1 a requirement on all 64-bit devices running Android Q and higher, and a recommendation for all 32-bit devices. Going forward, this will help provide a uniform high-performance graphics API for apps and games to use.

Neural Networks API 1.2

Since introducing the Neural Networks API (NNAPI) in 2017, we've continued to expand the number of operations supported and improve existing functionality. In Android Q, we've added 60 new ops, including ARGMAX, ARGMIN, and quantized LSTM, alongside a range of performance optimizations. This lays the foundation for accelerating a much greater range of models -- such as those for object detection and image segmentation. We are working with hardware vendors and popular machine learning frameworks such as TensorFlow to optimize and roll out support for NNAPI 1.2.

Strengthening Android's Foundations

ART performance

Android Q introduces several new improvements to the ART runtime which help apps start faster and consume less memory, without requiring any work from developers.

Since Android Nougat, ART has offered Profile Guided Optimization (PGO), which speeds app startup over time by identifying and precompiling frequently executed parts of your code. To help with initial app startup, Google Play is now delivering cloud-based profiles along with APKs. These are anonymized, aggregate ART profiles that let ART pre-compile parts of your app even before it's run, giving a significant jump-start to the overall optimization process. Cloud-based profiles benefit all apps and they're already available to devices running Android P and higher.

We're also continuing to make improvements in ART itself. For example, in Android Q we've optimized the Zygote process by starting your app's process earlier and moving it to a security container, so it's ready to launch immediately. We're storing more information in the app's heap image, such as classes, and using threading to load the image faster. We're also adding Generational Garbage Collection to ART's Concurrent Copying (CC) Garbage Collector. Generational CC is more efficient as it collects young-generation objects separately, incurring much lower cost as compared to full-heap GC, while still reclaiming a good amount of space. This makes garbage collection overall more efficient in terms of time and CPU, reducing jank and helping apps run better on lower-end devices.

Security for apps

BiometricPrompt is our unified authentication framework to support biometrics at a system level. In Android Q we're extending support for passive authentication methods such as face, and adding implicit and explicit authentication flows. In the explicit flow, the user must explicitly confirm the transaction in the TEE during the authentication. The implicit flow is designed as a lighter-weight alternative for transactions with passive authentication. We've also improved the fallback to device credentials when needed.
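A minimal sketch using the AndroidX Biometric compatibility library (the strings and callback body are illustrative; setConfirmationRequired(false) opts into the lighter-weight implicit flow where the modality supports it):

val executor = ContextCompat.getMainExecutor(activity)

val prompt = BiometricPrompt(activity, executor,
    object : BiometricPrompt.AuthenticationCallback() {
        override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
            // Proceed with the sensitive action.
        }
    })

val promptInfo = BiometricPrompt.PromptInfo.Builder()
    .setTitle("Confirm purchase")
    .setNegativeButtonText("Cancel")
    .setConfirmationRequired(false)  // implicit flow for passive modalities such as face
    .build()

prompt.authenticate(promptInfo)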

Android Q adds support for TLS 1.3, a major revision to the TLS standard that includes performance benefits and enhanced security. Our benchmarks indicate that secure connections can be established as much as 40% faster with TLS 1.3 compared to TLS 1.2. TLS 1.3 is enabled by default for all TLS connections. See the docs for details.

Compatibility through public APIs

Another thing we all care about is ensuring that apps run smoothly as the OS changes and evolves. Apps using non-SDK APIs risk crashes for users and emergency rollouts for developers. In Android Q we're continuing our long-term effort begun in Android P to move apps toward only using public APIs. We know that moving your app away from non-SDK APIs will take time, so we're giving you advance notice.

In Android Q we're restricting access to more non-SDK interfaces and asking you to use the public equivalents instead. To help you make the transition and prevent your apps from breaking, we're enabling the restrictions only when your app is targeting Android Q. We'll continue adding public alternative APIs based on your requests; in cases where there is no public API that meets your use case, please let us know.

It's important to test your apps for uses of non-SDK interfaces. We recommend using the StrictMode method detectNonSdkApiUsage() to warn when your app accesses non-SDK APIs via reflection or JNI. Even if the APIs are exempted (grey-listed) at this time, it's best to plan for the future and eliminate their use to reduce compatibility issues. For more details on the restrictions in Android Q, see the developer guide.
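A minimal sketch of wiring this up, typically only in debug builds:

if (BuildConfig.DEBUG) {
    StrictMode.setVmPolicy(
        StrictMode.VmPolicy.Builder()
            .detectNonSdkApiUsage()  // flag reflective/JNI access to non-SDK APIs
            .penaltyLog()            // log a stack trace for each violation
            .build()
    )
}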

Modern Android

We're expanding our efforts to have all apps take full advantage of the security and performance features in the latest version of Android. Later this year, Google Play will require you to set your app's targetSdkVersion to 28 (Android 9 Pie) in new apps and updates. In line with these changes, Android Q will warn users with a dialog when they first run an app that targets a platform earlier than API level 23 (Android Marshmallow). Here's a checklist of resources to help you migrate your app.

We're also moving the ecosystem toward readiness for 64-bit devices. Later this year, Google Play will require 64-bit support in all apps. If your app uses native SDKs or libraries, keep in mind that you'll need to provide 64-bit compliant versions of those SDKs or libraries. See the developer guide for details on how to get ready.

Get started with Android Q Beta

With important privacy features that are likely to affect your apps, we recommend getting started with testing right away. In particular, you'll want to enable and test with Android Q storage changes, new location permission states, restrictions on background app launch, and restrictions on device identifiers. See the privacy documentation for details.

To get started, just install your current app from Google Play onto a device or Android Virtual Device running Android Q Beta and work through the user flows. The app should run and look great, and handle the Android Q behavior changes for all apps properly. If you find issues, we recommend fixing them in the current app, without changing your targeting level. Take a look at the migration guide for steps and a recommended timeline.

Next, update your app's targetSdkVersion to 'Q' as soon as possible. This lets you test your app with all of the privacy and security features in Android Q, as well as any other behavior changes for apps targeting Q.

Explore the new features and APIs

When you're ready, dive into Android Q and learn about the new features and APIs you can use in your apps. Take a look at the API diff report, the Android Q Beta API reference, and developer guides as a starting point. Also, on the Android Q Beta developer site, you'll find release notes and support resources for reporting issues.

To build with Android Q, download the Android Q Beta SDK and tools into Android Studio 3.3 or higher, and follow these instructions to configure your environment. If you want the latest fixes for Android Q related changes, we recommend you use Android Studio 3.5 or higher.

How do I get Android Q Beta?

It's easy - you can enroll here to get Android Q Beta updates over-the-air, on any Pixel device (and this year we're supporting all three generations of Pixel -- Pixel 3, Pixel 2, and even the original Pixel!). Downloadable system images for those devices are also available. If you don't have a Pixel device, you can use the Android Emulator, and download the latest emulator system images via the SDK Manager in Android Studio.

We plan to update the preview system images and SDK regularly throughout the preview. We'll have more features to share as the Beta program moves forward.

As always, your feedback is critical, so please let us know what you think — the sooner we hear from you, the more of your feedback we can integrate. When you find issues, please report them here. We have separate hotlists for filing platform issues, app compatibility issues, and third-party SDK issues.

March 13th 2019, 3:48 pm

Grow your indie game with Google Play

Android Developers Blog

Posted by Patricia Correa, Director, Platforms & Ecosystems Developer Marketing

Google Play empowers game developers of all sizes to engage and delight people everywhere, and build successful businesses too. We are inspired by the passion and creativity we see from the indie games community, and, over the past few years, we've invested in and nurtured indie games developers around the world, helping them express their unique voice and bring ideas to life.

This year, we've put together several initiatives to help the indie community.

Indie Games Showcase

For indie developers who are constantly pushing the boundaries of storytelling, visual excellence, and creativity in mobile, today we are announcing the Indie Games Showcase, an international competition for games studios from Europe*, South Korea, and Japan. Those of you who meet the eligibility criteria (as outlined below) can enter your game for a chance to win several prizes, including:

How to enter the competition

If you're over 18 years old, based in one of the eligible countries, have 30 or fewer full-time employees, and have published a new game on Google Play after 1 January 2018, you can enter your game. If you're planning on publishing a new game soon, you can also enter by submitting a private beta. Submissions close on May 6, 2019. Check out all the details in the terms and conditions for each region. Enter now!

Indie Games Accelerator

Last year we launched our first games accelerator for developers in Southeast Asia, India, and Pakistan and saw great results. We are happy to announce that we are expanding the format to accept developers from select countries in the Middle East, Africa, and Latin America, with applications for the 2019 cohort opening soon. The Indie Games Accelerator is a 6-month intensive program for top games startups, powered by mentors from the gaming industry as well as Google experts, offering a comprehensive curriculum that covers all aspects of building a great game and company.

Mobile Developer Day at GDC

We will be hosting our annual Developer Day at the Game Developers Conference in San Francisco on Monday, March 18th. Join us for a full day of sessions covering tools and best practices to help build a successful mobile games business. We'll focus on game quality, effective monetization and growth strategies, and how to create, connect, and scale with Google. Sign up to stay up to date or join us via livestream.

Developer Days

We also want to engage with you in person with a series of events. We will be announcing them shortly, so please make sure to sign up to our newsletter to get notified about events and programs for indie developers.

Academy for App Success

Looking for tips on how to use various developer tools in the Play Console? Get free training through our e-learning program, the Academy for App Success. We even have a custom Play Console for game developers course to get a jump start on Google Play.

We look forward to seeing your amazing work and sharing your creativity with other developers, gamers and industry experts around the world. And don't forget to submit your game for a chance to get featured on Indie Corner on Google Play.

* The competition is open to developers from the following European countries: Austria, Belgium, Belarus, Czech Republic, Denmark, Finland, France, Germany, Israel, Italy, Netherlands, Norway, Poland, Romania, Russia, Slovakia, Spain, Sweden, Ukraine, and the United Kingdom (including Northern Ireland).



March 13th 2019, 2:07 am

Supplement your earnings with Rewarded Products

Android Developers Blog

Posted by Patrick Davis, Product Manager, Google Play

Developers are increasingly using multiple methods to monetize their apps and games. One trend has been to reward users for a monetizable action, like watching a video, with in-game currency or other benefits. This gives users more choice in how they experience the app or game, and has been an effective way to monetize non-paying users.

To support this monetization method, Google Play is excited to announce rewarded products, a new product type now available in open beta in the Play Console.

Rewarded products make it easy for Google Play developers to increase their monetized user base. Our first rewarded product offering will be in a video format. Users can elect to watch a video advertisement and upon completion be rewarded with virtual goods or in-game currency. In the example below, the user selects "watch ad", views the video, and then is granted 100 coins.

Rewarded products can be added to any app using the Google Play Billing Library or AIDL interface with only a few additional API calls. No extra SDK integration is required. This significantly reduces the work required to implement compared to other offerings. Rewarded products are powered by AdMob technology to give access to the broad range of content from advertisers currently working with Google.

Get started with rewarded products.

For developers interested in best practices on how to diversify revenue and how rewarded products fit in, please visit us at Google's Mobile Developer day at GDC or watch via the livestream.


March 6th 2019, 1:14 pm

Android Jetpack WorkManager Stable Release

Android Developers Blog

Posted by Sumir Kataria, Software Engineering Lead & Jisha Abubaker, Product Manager

Simplify how you manage background work with WorkManager

Today, we're happy to announce the release of Android Jetpack WorkManager 1.0 Stable. We want to thank so many of you in our dev community who have given us feedback and logged bugs along the way - we've gotten here thanks to your help!

When we looked at the top problems faced by developers, we saw that doing background processing reliably and in a battery-friendly manner was a huge challenge. This meant that periodically fetching fresh content or uploading your logs was complex. Different versions of Android provided different tools for the job, each with their own API quirks. For example, listening for network or storage availability and automatically retrying your tasks involved a lot of work.

Our answer to these challenges was WorkManager. We introduced a preview of the Android Jetpack WorkManager library at Google I/O 2018 and have since iterated on it with additional features and bug fixes thanks to your valuable input.

The goal of WorkManager is to make background operations easy for you. WorkManager takes into account constraints like battery-optimization, storage, or network availability, and it only runs its tasks when the appropriate conditions are met. It also knows when to retry or reschedule your work--even if your device or app restarts.

We believe WorkManager is a friendly, approachable API that can take care of one of the most complex parts of Android for you so you can focus on the code that makes your app unique.

WorkManager Highlights

Here are some key features of WorkManager:

Watch and read below to learn when and how to use WorkManager to simplify managing background work in your apps:

When to use WorkManager

WorkManager is best suited for tasks that can be deferred, but are still expected to run even if the application or device restarts (for example, syncing data periodically with a backend service and uploading logs or analytics data).

For tasks like sending an instant message that are required to run immediately or for tasks that are not required to run after the app exits, take a look at our background processing guide to learn which solution meets your needs.

How to use WorkManager

To get started with the WorkManager API, add the WorkManager dependency (available from Google's Maven repository, in Java and Kotlin flavors) to your application's build.gradle file:

dependencies {
    def work_version = "1.0.0"

    // Java
    implementation "android.arch.work:work-runtime:$work_version"

    // Kotlin KTX + coroutines
    implementation "android.arch.work:work-runtime-ktx:$work_version"
}

Now, simply subclass Worker, implement your background work in doWork(), and enqueue it with WorkManager.

class MyWorker(ctx: Context, params: WorkerParameters)
    : Worker(ctx, params) {

    override fun doWork(): Result {
        // Do the work you want done in the background here.
        return Result.success()
    }
}

// Optionally, add constraints like power or network availability.
val constraints: Constraints = Constraints.Builder()
    .setRequiresCharging(true)
    .setRequiredNetworkType(NetworkType.CONNECTED)
    .build()

val myWork = OneTimeWorkRequestBuilder<MyWorker>()
    .setConstraints(constraints)
    .build()

// Now, enqueue your work.
WorkManager.getInstance().enqueue(myWork)

WorkManager will now take care of running your task when it detects that your device is charging and the network is available.

Why use WorkManager

Backward compatibility

WorkManager will leverage the right scheduling API under the hood: it uses JobScheduler API on Android 6.0+ (API 23+) and a combination of AlarmManager and BroadcastReceiver on previous versions.

It also seeks to ensure the best possible behavior so that it complies with system optimizations introduced in newer Android API versions to maximize battery and enforce good app behavior.

For example, WorkManager will schedule background work during the maintenance window for Android 6.0+ (API 23+) devices when the system is in Doze mode.

Reliable scheduling

With WorkManager, you can easily add constraints like network availability or charging status. Your work will run when the constraints are met, and it will be retried automatically if it is stopped while running. For example, if your task requires the network to be available, the task will be stopped when the network is no longer available and retried later.

You can also monitor work status and retrieve work result using LiveData. This allows your UI to be notified when your task is completed.
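For instance, a minimal sketch of observing the request enqueued above from an Activity or Fragment:

WorkManager.getInstance()
    .getWorkInfoByIdLiveData(myWork.id)
    .observe(this, Observer { workInfo ->
        if (workInfo != null && workInfo.state == WorkInfo.State.SUCCEEDED) {
            // The work finished; update the UI with the result.
        }
    })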

In the event that your work fails, you can control how your work is retried by configuring how backoff is handled.
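For example, a minimal sketch of opting into exponential backoff with a 10-second initial delay:

val uploadWork = OneTimeWorkRequestBuilder<MyWorker>()
    .setBackoffCriteria(
        BackoffPolicy.EXPONENTIAL,  // or BackoffPolicy.LINEAR
        10, TimeUnit.SECONDS        // initial delay before the first retry
    )
    .build()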

WorkManager is also able to reschedule your work, using a record of your work in its local database, if an application or device restart occurs.

Control over how your work is run

We understand that each app has unique needs, and so do your tasks--even within the same app. WorkManager provides a simple yet highly flexible API surface to help configure your work and how it is run.

Take advantage of one-off scheduling with OneTimeWorkRequest or recurrent scheduling with PeriodicWorkRequest.
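A minimal sketch of a recurring request using the KTX builder (the 12-hour interval is just an example; periodic work is subject to OS batching):

val periodicSync = PeriodicWorkRequestBuilder<MyWorker>(12, TimeUnit.HOURS)
    .setConstraints(constraints)
    .build()

WorkManager.getInstance().enqueue(periodicSync)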

You can also chain your one time work requests to run in order or in parallel. If any work in the chain fails, WorkManager seeks to ensure that the remaining chain of work will not run. Read more about chaining work requests here.
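A minimal sketch of a chain (the compress and upload requests are hypothetical OneTimeWorkRequests defined elsewhere):

WorkManager.getInstance()
    .beginWith(listOf(compressPhotoWork, compressVideoWork))  // run in parallel
    .then(uploadWork)                                         // runs after both succeed
    .enqueue()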

If you require more flexibility over how WorkManager parallelizes and manages work, check out our advanced threading guide.

What developers have to say

redBus, the largest online bus ticketing platform, shares their experience using WorkManager to simplify how they collect user feedback in their Android app:

"Feedback is critical to redBus as we expand into other countries. It often happens that a user gives critical feedback about a functionality within the redBus app but when the app tries to upload the feedback to backend servers, there might not be enough network coverage or battery.
WorkManager has simplified the way the redBus app delivers information to its backend servers. The WorkManager library's capability to handle parameters like network connectivity and battery, and to use appropriate handlers like AlarmManager or JobScheduler, has enabled us to concentrate on building business logic and offloading execution complexity to WorkManager."

- Dinesh Shanmugam

Android Lead, redBus.in

Get started with WorkManager

Check out our getting started guide and hands-on codelab to start using the WorkManager library for your background task needs.

We appreciate your feedback, including features you like and features you would like to see.

If you find a bug or issue, feel free to file an issue.

March 5th 2019, 1:09 pm

Android Security Improvement update: Helping developers harden their apps, one thwarted vulnerability at a time

Android Developers Blog

Posted by Patrick Mutchler and Meghan Kelly, Android Security & Privacy Team

Helping Android app developers build secure apps, free of known vulnerabilities, means helping the overall ecosystem thrive. This is why we launched the Application Security Improvement Program five years ago, and why we're still so invested in its success today.

What the Android Security Improvement Program does

When an app is submitted to the Google Play store, we scan it to determine if a variety of vulnerabilities are present. If we find something concerning, we flag it to the developer and then help them to remedy the situation.

Think of it like a routine physical. If there are no problems, the app runs through our normal tests and continues on the process to being published in the Play Store. If there is a problem, however, we provide a diagnosis and next steps to get back to healthy form.

Over its lifetime, the program has helped more than 300,000 developers to fix more than 1,000,000 apps on Google Play. In 2018 alone, the program helped over 30,000 developers fix over 75,000 apps. The downstream effect means that those 75,000 vulnerable apps are not distributed to users with the same security issues present, which we consider a win.

What vulnerabilities are covered

The App Security Improvement program covers a broad range of security issues in Android apps. These can be as specific as security issues in certain versions of popular libraries (ex: CVE-2015-5256) and as broad as unsafe TLS/SSL certificate validation.

We are continuously improving this program's capabilities by improving the existing checks and launching checks for more classes of security vulnerability. In 2018, we deployed warnings for six additional security vulnerability classes including:

  1. SQL Injection
  2. File-based Cross-Site Scripting
  3. Cross-App Scripting
  4. Leaked Third-Party Credentials
  5. Scheme Hijacking
  6. JavaScript Interface Injection

Ensuring that we're continuing to evolve the program as new exploits emerge is a top priority for us. We are continuing to work on this throughout 2019.

Keeping Android users safe is important to Google. We know that app security is often tricky and that developers can make mistakes. We hope to see this program grow in the years to come, helping developers worldwide build apps users can truly trust.

February 28th 2019, 1:19 pm

Google Play Protect in 2018: New updates to keep Android users secure

Android Developers Blog

Posted by Rahul Mishra and Tom Watkins, Android Security & Privacy Team

In 2018, Google Play Protect made Android devices running Google Play some of the most secure smartphones available, scanning over 50 billion apps every day for harmful behavior.

Android devices can genuinely improve people's lives through our accessibility features, Google Assistant, digital wellbeing, Family Link, and more — but we can only do this if they are safe and secure enough to earn users' long term trust. This is Google Play Protect's charter and we're encouraged by this past year's advancements.

Google Play Protect, a refresher

Google Play Protect is the technology we use to ensure that any device shipping with the Google Play Store is secured against potentially harmful applications (PHAs). It is made up of a giant backend scanning engine that aids our analysts in sourcing and vetting applications made available on the Play Store, and built-in protection that scans apps on users' devices, immobilizing PHAs and warning users.

This technology protects over 2 billion devices in the Android ecosystem every day.

What's new

On by default

We strongly believe that security should be a built-in feature of every device, not something a user needs to find and enable. When security features function at their best, most users do not need to be aware of them. To this end, we are pleased to announce that Google Play Protect is now enabled by default to secure all new devices, right out of the box. The user is notified that Google Play Protect is running, and has the option to turn it off whenever desired.

New and rare apps

Android is deployed in many diverse ways across many different users. We know that the ecosystem would not be as powerful and vibrant as it is today without an equally diverse array of apps to choose from. But installing new apps, especially from unknown sources, can carry risk.

Last year we launched a new feature that notifies users when they are installing apps that are new or rarely installed in the ecosystem. In these scenarios, the feature shows a warning, giving users pause to consider whether they want to trust this app, and advising them to take additional care and check the source of installation. Once Google has fully analyzed the app and determined that it is not harmful, the notification no longer displays. In 2018, this warning was shown around 100,000 times per day.

Context is everything: warning users on launch

It's easy to misunderstand alerts when presented out of context. We're trained to click through notifications without reading them and get back to what we were doing as quickly as possible. We know that providing timely and context-sensitive alerts to users is critical for them to be of value. We recently enabled a security feature first introduced in Android Oreo which warns users when they are about to launch a potentially harmful app on their device.

This new warning dialog provides in-context information about which app the user is about to launch, why we think it may be harmful and what might happen if they open the app. We also provide clear guidance on what to do next. These in-context dialogs ensure users are protected even if they accidentally missed an alert.

Auto-disabling apps

Google Play Protect has long been able to disable the most harmful categories of apps on users' devices automatically, providing robust protection where we believe harm will be done.

In 2018, we extended this coverage to apps installed from Play that were later found to have violated Google Play's policies, e.g. on privacy, deceptive behavior or content. These apps have been suspended and removed from the Google Play Store.

This does not remove the app from the user's device, but it does notify the user and prevents them from opening the app accidentally. The notification gives the option to remove the app entirely.

Keeping the Android ecosystem secure is no easy task, but we firmly believe that Google Play Protect is an important security layer that protects users' devices and their data while maintaining the freedom, diversity and openness that makes Android, well, Android.

Acknowledgements: This post leveraged contributions from Meghan Kelly and William Luh.

February 26th 2019, 1:26 pm

Expanding target API level requirements in 2019

Android Developers Blog

Posted by Edward Cunningham, Android Security & Privacy Team

In a previous blog we described how API behavior changes advance the security and privacy protections of Android, and include user experience improvements that prevent apps from accidentally overusing resources like battery and memory.

Since November 2018, all app updates on Google Play have been required to target API level 26 (Android 8.0) or higher. Thanks to the efforts of thousands of app developers, Android users now enjoy more apps using modern APIs than ever before, bringing significant security and privacy benefits. For example, during 2018 over 150,000 apps added support for runtime permissions, giving users granular control over the data they share.

Today we're providing more information about the Google Play requirements for 2019, and announcing some changes that affect apps distributed via other stores.

Google Play requirements for 2019

In order to provide users with the best Android experience possible, the Google Play Console will continue to require that apps target a recent API level:

Existing apps that are not receiving updates are unaffected and can continue to be downloaded from the Play Store. Apps can still use any minSdkVersion, so there is no change to your ability to build apps for older Android versions.

For a list of changes introduced in Android 9 Pie, check out our page on behavior changes for apps targeting API level 28+.

Apps distributed via other stores

Targeting a recent API level is valuable regardless of how an app is distributed. In China, major app stores from Huawei, OPPO, Vivo, Xiaomi, Baidu, Alibaba, and Tencent will be requiring that apps target API level 26 (Android 8.0) or higher in 2019. We expect many others to introduce similar requirements – an important step to improve the security of the app ecosystem.

Over 95% of spyware we detect outside of the Play Store intentionally targets API level 22 or lower, avoiding runtime permissions even when installed on recent Android versions. To protect users from malware, and support this ecosystem initiative, Google Play Protect will warn users when they attempt to install APKs from any source that do not target a recent API level:

These Play Protect warnings will show only if the app's targetSdkVersion is lower than the device API level. For example, a user with a device running Android 6.0 (Marshmallow) will be warned when installing any new APK that targets API level 22 or lower. Users with devices running Android 8.0 (Oreo) or higher will be warned when installing any new APK that targets API level 25 or lower.

Prior to August, Play Protect will start showing these warnings on devices with Developer options enabled to give advance notice to developers of apps outside of the Play Store. To ensure compatibility across all Android versions, developers should make sure that new versions of any apps target API level 26+.

Existing apps that have been released (via any distribution channel) and are not receiving updates will be unaffected – users will not be warned when installing them.

Getting started

For advice on how to change your app’s target API level, take a look at the migration guide and this talk from I/O 2018: Migrate your existing app to target Android Oreo and above.

We're extremely grateful to the Android developers worldwide who have already updated their apps to deliver security improvements for their users. We look forward to making great progress together in 2019.

February 21st 2019, 2:36 pm

How we fought bad apps and malicious developers in 2018

Android Developers Blog

Posted by Andrew Ahn, Product Manager, Google Play

Google Play is committed to providing a secure and safe platform for billions of Android users on their journey discovering and experiencing the apps they love and enjoy. To deliver against this commitment, we worked last year to improve our abuse detection technologies and systems, and significantly increased our team of product managers, engineers, policy experts, and operations leaders to fight against bad actors.

In 2018, we introduced a series of new policies to protect users from new abuse trends, detected and removed malicious developers faster, and stopped more malicious apps from entering the Google Play Store than ever before. The number of rejected app submissions increased by more than 55 percent, and we increased app suspensions by more than 66 percent. These increases can be attributed to our continued efforts to tighten policies to reduce the number of harmful apps on the Play Store, as well as our investments in automated protections and human review processes that play critical roles in identifying and enforcing on bad apps.

In addition to identifying and stopping bad apps from entering the Play Store, our Google Play Protect system now scans over 50 billion apps on users' devices each day to make sure apps installed on the device aren't behaving in harmful ways. With such protection, apps from Google Play are eight times less likely to harm a user's device than Android apps from other sources.

Here are some areas we've been focusing on in the last year and that will continue to be a priority for us in 2019:

Protecting User Privacy

Protecting users' data and privacy is a critical factor in building user trust. We've long required developers to limit their device permission requests to what's necessary to provide the features of an app. Also, to help users understand how their data is being used, we've required developers to provide prominent disclosures about the collection and use of sensitive user data. Last year, we rejected or removed tens of thousands of apps that weren't in compliance with Play's policies related to user data and privacy.

In October 2018, we announced a new policy restricting the use of the SMS and Call Log permissions to a limited number of cases, such as where an app has been selected as the user's default app for making calls or sending text messages. We've recently started to remove apps from Google Play that violate this policy. We plan to introduce additional policies for device permissions and user data throughout 2019.

Developer integrity

We find that over 80% of severe policy violations are conducted by repeat offenders and abusive developer networks. When malicious developers are banned, they often create new accounts or buy developer accounts on the black market in order to come back to Google Play. We've further enhanced our clustering and account matching technologies, and by combining these technologies with the expertise of our human reviewers, we've made it more difficult for spammy developer networks to gain installs by blocking their apps from being published in the first place.

Harmful app contents and behaviors

As mentioned in last year's blog post, we fought against hundreds of thousands of impersonators, apps with inappropriate content, and Potentially Harmful Applications (PHAs). In a continued fight against these types of apps, not only do we apply advanced machine learning models to spot suspicious apps, we also conduct static and dynamic analyses, intelligently use user engagement and feedback data, and leverage skilled human reviews, which have helped in finding more bad apps with higher accuracy and efficiency.

Despite our enhanced and added layers of defense against bad apps, we know bad actors will continue to try to evade our systems by changing their tactics and cloaking bad behaviors. We will continue to enhance our capabilities to counter such adversarial behavior, and work relentlessly to provide our users with a secure and safe app store.


February 13th 2019, 1:18 pm

An Update on Android Things

Android Developers Blog

Posted by Dave Smith, Developer Advocate for IoT

Over the past year, Google has worked closely with partners to create consumer products powered by Android Things with the Google Assistant built-in. Given the successes we have seen with our partners in smart speakers and smart displays, we are refocusing Android Things as a platform for OEM partners to build devices in those categories moving forward. Therefore, support for production System on Modules (SoMs) based on NXP, Qualcomm, and MediaTek hardware will not be made available through the public developer platform at this time.

Android Things continues to be a platform for experimenting with and building smart, connected devices using the Android Things SDK on top of popular hardware like the NXP i.MX7D and Raspberry Pi 3B. System images for these boards will remain available through the Android Things console where developers can create new builds and push app updates for up to 100 devices for non-commercial use.

We remain dedicated to providing a managed platform for IoT devices, including turnkey hardware solutions. For developers looking to commercialize IoT products in 2019, check out Cloud IoT Core for secure device connectivity at scale and the upcoming Cloud IoT Edge runtime for a suite of managed edge computing services. For on-device machine learning applications, stay tuned for more details about our Edge TPU development boards.

February 12th 2019, 2:00 pm

Google releases source code of Santa Tracker for Android 2018

Android Developers Blog

Posted by Chris Banes, Chief Elf of Android Engineering

Today, we pushed the source code for Google's Santa Tracker 2018 Android app at google/santa-tracker-android, including its 17 mini-games, Santa tracking feature, Wear app and more!

Visually the app looks much the same this year, but under the hood the app has gone through a massive size reduction exercise to make the download from Google Play as small as possible. When a user downloads the app, the initial download is now just 9.2MB, compared to last year's app, which was 60MB. That's an 85% reduction! 🗜️

Android App Bundle

We achieved that reduction by migrating the app over to an Android App Bundle. The main benefit is that Google Play can now serve dynamically optimized APKs to users' devices. Moreover, we were also able to separate out all of the games into their own dynamic feature modules, downloaded on demand. This is why you might have seen a progress bar when you first opened a game: we are actually downloading the game from Google Play before starting it:

The progress bar shown while a game is fetched from Google Play

You can read more about our journey migrating over to App Bundle in a small blog series, starting with our 'Moving to Android App Bundle' post.

Gboard stickers

One of the new features we added this year was a Gboard sticker pack, allowing users to share stickers to their friends. You might even notice some of the characters from the games in the stickers!

'Santa Dunk' is one of the 20 available stickers

We use Firebase App Indexing to publish our stickers to the local index on the device, where the Gboard keyboard app picks them up, allowing the user to share them in apps. You can see the source code here.

The sticker pack being used in a very important conversation

Lots of code improvements

Aside from the things mentioned above, we've also completed a number of code health improvements. We have increased the minimum SDK version to Lollipop (21), migrated from the Support Library to AndroidX, reduced the file size of our game assets by switching to modern formats, and lots of other small improvements! Phew 😅.

Go explore the code

If you're interested go checkout the code and let us know what you think. If you have any questions or issues, please let us know via the issue tracker.

January 29th 2019, 1:04 pm

Google Mobile Developer Day is coming to GDC 2019

Android Developers Blog

Posted by Kacey Fahey, Developer Marketing, Google Play

We're excited to be part of the Game Developers Conference (GDC) 2019 in San Francisco. Join us on Monday, March 18th at the Google Mobile Developer Day, either in person or over live stream, for a full day of sessions covering tools and best practices to help build a successful mobile games business on Google Play. We'll focus on game quality, effective monetization and growth strategies, and how to create, connect, and scale with Google.

This year's sessions are focused on tips and tools to help your mobile game business succeed. Come hear our latest announcements and industry trends, as well as learnings from industry peers. We will hold a more technical session in the second half of the day, where we'll share ways to optimize your mobile game's performance for the best possible player experience.

Also, make sure to visit the Google booth from Wednesday March 20th until Friday March 22nd. Here, you will be able to interact with hands-on demos, attend talks in the theater, and get your questions answered by Google experts. We're bringing a big team and hope to see you there.

Learn more about Google's activities throughout the week of GDC and sign up to stay informed. For those who can't make it in person, join the live stream starting at 10am PST on Monday, March 18th. These events are part of the official Game Developers Conference and require a pass to attend.


January 23rd 2019, 12:49 pm

Grow your app business internationally through localization on Google Play

Android Developers Blog

Posted by Chris Yang, Program Manager, Translation Service

It is not uncommon for developers to have the following concerns and thoughts when considering whether to localize their apps: "I just don't have the time!" "Translation is too expensive." "High-quality translation is just hard to find." Does this sound familiar?

At Google, we consider translation a key component of making the world's information universally accessible and useful. This commitment extends not only to localizing our own products, but also to providing tools to help developers and translators more easily localize their apps.

Introducing the Google Play App Translation Service

Available in the Google Play Console, the Google Play App Translation Service simplifies localization of your app user interface strings, store listing, in-app product names, and universal apps campaign ads. Thousands of developers have already used this service to reach hundreds of millions of users worldwide.

Here is an overview of some of the ways it can help:

1. Quick and easy - Order in minutes and receive your translation in as little as two days.

2. Professional and human - Get high-quality translations by real human translators.

3. Value for money - Translate your app for as little as $0.07 per word.

Ordering a Translation

Find the Translation Service in the Google Play Console:

When you're ready to translate, just select the languages to use for translation, choose a vendor, and place your order.

Select languages to translate into.

Choose what type of content you want to translate.

Easily complete purchase of the service.

Language recommendations

You can also expand your global footprint with translation recommendations that can help increase installs. The recommendations can be found in the Google Play Console.

The language recommendation feature is developed using machine learning and is based on your app's install history and market data.

Did you know that you can reach almost 80% of internet users worldwide with only 10 languages? In particular, the Google Play opportunity in Russia and the Middle East continues to grow. Let us know once you have localized for these markets so we can consider featuring your app or game in the Now in Russian and Now in Arabic collections on the Play Store.

Launching the translation

Once you download the translation, you'll be ready to publish your newly translated app update on Google Play.

Get started with the App Translation Service today and let us know what you think!


January 16th 2019, 4:27 pm

Get your apps ready for the 64-bit requirement

Android Developers Blog

Posted by Vlad Radu, Product Manager, Play and Diana Wong, Product Manager, Android

64-bit CPUs deliver faster, richer experiences for your users. Adding a 64-bit version of your app provides performance improvements, makes way for future innovation, and sets you up for devices with 64-bit only hardware.

We want to help you get ready and know you need time to plan. We’ve supported 64-bit CPUs since Android 5.0 Lollipop and in 2017 we first announced that apps using native code must provide a 64-bit version (in addition to the 32-bit version). Today we’re providing more detailed information and timelines to make it as easy as possible to transition in 2019.

The 64-bit requirement: what it means for developers

Starting August 1, 2019:

Starting August 1, 2021:

The requirement does not apply to:

We are not making changes to our policy on 32-bit support. Play will continue to deliver apps to 32-bit devices. This requirement means that apps with 32-bit native code will need to have an additional 64-bit version as well.

Preparing for the 64-bit requirement

We anticipate that for most developers, the move to 64-bit should be straightforward. Many apps are written entirely in non-native code (e.g. the Java programming language or Kotlin) and do not need code changes.

All developers: Here is an overview of the steps you will need to take in order to become 64-bit compliant. For a more detailed outline of this process refer to our in-depth documentation.

Inspect your APK or app bundle for native code. You can check for .so files using APK Analyzer. Identify whether they are built from your own code or are imported by an SDK or library that you are using. If you do not have any .so files in your APK, you are already 64-bit compliant.

Enable 64-bit architectures and rebuild native code (.so files) imported by your own code. See the documentation for more details.
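
If your native code is built through the Android Gradle plugin, enabling the 64-bit ABIs is typically a build-configuration change. Below is a minimal sketch of what that can look like in a Kotlin DSL build script; the exact ABI list you ship is your own decision and is not prescribed by this post.

// build.gradle.kts — a sketch, assuming the Android Gradle plugin with an NDK/CMake build.
android {
    defaultConfig {
        ndk {
            // Build both 32-bit and 64-bit variants of your .so files so Play
            // can serve the right one to each device.
            abiFilters += listOf("armeabi-v7a", "arm64-v8a", "x86", "x86_64")
        }
    }
}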

Game developers: The three most used engines all currently support 64-bit (Unreal & Cocos2d since 2015, Unity since 2018). We understand that migrating a 3rd party game engine is an intensive process with long lead times.

Since Unity only recently began providing 64-bit support in versions 2017.4 and 2018.2, we are granting an automatic extension to existing games using versions 5.6 or older until August 2021. Unity provides guides that can help you through the process of upgrading to a 64-bit compliant version.

SDK and library owners: Update for 64-bit compliance as soon as possible to give app developers time to adapt, and let your developers know. Sign up and register your SDK to receive updates about the latest tools and information that can help you serve your customers.

Going forward

For those that already support 64-bit - thank you and great work! If you haven’t yet, we encourage you to begin any work for the 64-bit requirement as soon as possible. As we move closer to the deadline, we’ll be updating our developer documentation with more information on how to check if your app is compliant.

We’re excited about the future that 64-bit CPUs bring in areas such as artificial intelligence, machine learning, and immersive mobile. Supporting 64-bit prepares the ecosystem for the innovation enabled by the advanced compute capabilities of 64-bit devices, and for future Android devices that only support 64-bit code.

How useful did you find this blog post?

January 15th 2019, 5:21 pm

Reminder SMS/Call Log Policy Changes

Android Developers Blog

Posted by Paul Bankhead, Director, Product Management, Google Play

TLDR; As previously announced and directly communicated to developers via email, we'll be removing apps from the Google Play Store that ask for SMS or Call Log permission. If you have not submitted a permissions declaration form and your app is removed, see below for next steps.

We take access to sensitive data and permissions very seriously. This is especially true with SMS and Call Log permissions, which were designed to allow users to pick their favorite dialer or messaging app, but have also been used to enable many other experiences that might not require that same level of access. In an effort to improve users' control over their data, last October we announced we would be restricting developer access to SMS and Call Log permissions.

Our new policy is designed to ensure that apps asking for these permissions need full and ongoing access to the sensitive data in order to accomplish the app's primary use case, and that users will understand why this data would be required for the app to function.

Developers whose apps used these permissions prior to our announcement were notified by email and given 90 days to either remove the permissions, or submit a permissions declaration form to enable further review.

More about app reviews

We take this review process seriously and understand it's a change for many developers. We apply the same criteria to all developers, including dozens of Google apps. We added to the list of approved use cases over the last few months as we evaluated feedback from developers.

Our global teams carefully review each submission. During the review process, we consider the following:

With this change, some use cases will no longer be allowed. However, many of the apps we reviewed with one of these permissions can rely on narrower APIs, reducing the scope of access while accomplishing similar functionality. For example, developers using SMS for account verification can alternatively use the SMS Retriever API, and apps that want to share content using SMS can prepopulate a message and trigger the default SMS app to show via intents.
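
As an illustration of that last alternative, here is a minimal sketch, with names of our own choosing, of prepopulating a message and handing it off to the user's default SMS app via an intent, so that no SMS permission is needed:

import android.content.Context
import android.content.Intent
import android.net.Uri

// Hypothetical helper: share text over SMS without holding any SMS permissions.
// The user's default SMS app is launched with the message body prefilled.
fun shareViaSms(context: Context, body: String) {
    val intent = Intent(Intent.ACTION_SENDTO).apply {
        data = Uri.parse("smsto:")      // no recipient; the user picks one in the SMS app
        putExtra("sms_body", body)      // prefilled message text
    }
    intent.resolveActivity(context.packageManager)?.let {
        context.startActivity(intent)
    }
}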

Tens of thousands of developers have already resubmitted their apps to support the new policy or have submitted a form. Thank you! Developers who submitted a form received a compliance extension until March 9th.

Next steps

Over the next few weeks, we will be removing apps from the Play Store that ask for SMS or Call Log permission and have not submitted a permission declaration form. If your app is removed and you would like to have it republished, you can do one of the following in the Play Console:

Keeping our overall Android ecosystem healthy is very important, and protection of user data is vital to the long term health of all developers. We know these changes have required significant work from you and we appreciate your efforts to create innovative experiences while protecting users' privacy.

January 14th 2019, 7:23 pm

Android Studio 3.3

Android Developers Blog

Posted by Jamal Eason, Product Manager

We are excited to kick off the new year with a stable release of Android Studio 3.3 focused on refinement and quality. You can download it today from developer.android.com/studio. Based on the feedback from many of you, we have taken a step back from large features to focus on our quality fundamentals. The goal is to ensure Android Studio continues to help you stay productive in making great apps for Android. Since the last stable release, Android Studio 3.3 addresses over 200 user-reported bugs. This release also includes official support for Navigation Editor, improved incremental Java compilation when using annotation processors, C++ code lint inspections, an updated new project wizard, and usability fixes for each of the performance profilers. In addition, saving snapshots on exit for the Android emulator is 8x faster.

Android Studio 3.3 kicks off the broader quality focus area for the year, which we call Project Marble. Announced at the Android Developer Summit in November 2018, Project Marble is the Android Studio team's focus on making the fundamental features and flows of the Integrated Development Environment (IDE) rock-solid, along with refining and polishing the user-facing features that matter to you in your day-to-day app development workflows. In Project Marble, we are specifically looking at reducing the number of crashes, hangs, memory leaks, and user-impacting bugs. We are also investing in our measurement infrastructure to prevent these issues from occurring. Stay tuned for more updates and details as we progress on this initiative.

This release of Android Studio is a solid milestone for the product. If you want the latest in feature refinement and quality, then download Android Studio 3.3 today on the stable release channel. Watch and read below for some of the notable changes and enhancements that you will find in Android Studio 3.3.

Develop

Navigation Editor

Clang-Tidy Code Inspection Settings

New Project Wizard

Delete Unused Directories Dialogue

IDE User Feedback

Build

Single-Variant Project Sync

Test

Android Pie à la mode: Security & Privacy

Android Developers Blog

Posted by Vikrant Nanda and René Mayrhofer, Android Security & Privacy Team

There is no better time to talk about Android dessert releases than the holidays because who doesn't love dessert? And what is one of our favorite desserts during the holiday season? Well, pie of course.

In all seriousness, pie is a great analogy because of how the various ingredients turn into multiple layers of goodness: right from the software crust on top to the hardware layer at the bottom. Read on for a summary of security and privacy features introduced in Android Pie this year.

Strengthening Android

Making Android more secure requires a combination of hardening the platform and advancing anti-exploitation techniques.

Platform hardening

With Android Pie, we updated File-Based Encryption to support external storage media (such as expandable storage cards). We also introduced support for metadata encryption where hardware support is present. With filesystem metadata encryption, a single key present at boot time encrypts whatever content is not encrypted by file-based encryption (such as directory layouts, file sizes, permissions, and creation/modification times).

Android Pie also introduced a BiometricPrompt API that apps can use to provide biometric authentication dialogs (such as a fingerprint prompt) on a device in a modality-agnostic fashion. This functionality creates a standardized look, feel, and placement for the dialog. This kind of standardization gives users more confidence that they're authenticating against a trusted biometric credential checker.
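
As a rough sketch of what that looks like in code (the strings and callback handling below are illustrative assumptions, not part of the original post), an app can build and show the framework prompt like this:

import android.app.Activity
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal
import java.util.concurrent.Executor

// A sketch of showing the system-standardized biometric dialog on API 28+.
fun authenticateWithBiometrics(activity: Activity, executor: Executor) {
    val prompt = BiometricPrompt.Builder(activity)
        .setTitle("Confirm it's you")
        .setDescription("Authenticate to continue")
        .setNegativeButton("Cancel", executor) { _, _ -> /* user tapped Cancel */ }
        .build()

    prompt.authenticate(CancellationSignal(), executor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // The user authenticated against a trusted biometric credential checker.
            }

            override fun onAuthenticationError(errorCode: Int, errString: CharSequence) {
                // Authentication is unavailable or was aborted.
            }
        })
}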

New protections and test cases for the Application Sandbox help ensure all non-privileged apps targeting Android Pie (and all future releases of Android) run in stronger SELinux sandboxes. By providing per-app cryptographic authentication to the sandbox, this protection improves app separation, prevents overriding safe defaults, and (most significantly) prevents apps from making their data widely accessible.

Anti-exploitation improvements

With Android Pie, we expanded our compiler-based security mitigations, which instrument runtime operations to fail safely when undefined behavior occurs.

Control Flow Integrity (CFI) is a security mechanism that disallows changes to the original control flow graph of compiled code. In Android Pie, it has been enabled by default within the media frameworks and other security-critical components, such as for Near Field Communication (NFC) and Bluetooth protocols. We also implemented support for CFI in the Android common kernel, continuing our efforts to harden the kernel in previous Android releases.

Integer Overflow Sanitization is a security technique used to mitigate memory corruption and information disclosure vulnerabilities caused by integer operations. We've expanded our use of Integer Overflow sanitizers by enabling their use in libraries where complex untrusted input is processed or where security vulnerabilities have been reported.

Continued investment in hardware-backed security

One of the highlights of Android Pie is Android Protected Confirmation, the first major mobile OS API that leverages a hardware-protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. Developers can use this API to display a trusted UI prompt to the user, requesting approval via a physical protected input (such as a button on the device). The resulting cryptographically signed statement allows the relying party to reaffirm that the user would like to complete a sensitive transaction through their app.
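
The snippet below is a rough sketch of invoking Android Protected Confirmation through the ConfirmationPrompt API on a supported device; the prompt text, extra data, and follow-up signing step are illustrative placeholders rather than a prescribed flow.

import android.content.Context
import android.security.ConfirmationCallback
import android.security.ConfirmationPrompt
import java.util.concurrent.Executor

// A sketch of requesting user approval via the Trusted UI on supported API 28+ devices.
fun confirmTransaction(context: Context, executor: Executor, details: ByteArray) {
    if (!ConfirmationPrompt.isSupported(context)) return   // not all devices have Trusted UI

    val prompt = ConfirmationPrompt.Builder(context)
        .setPromptText("Send 10 USD to Alice?")   // shown on the protected display
        .setExtraData(details)                    // included in the signed statement
        .build()

    prompt.presentPrompt(executor, object : ConfirmationCallback() {
        override fun onConfirmed(dataThatWasConfirmed: ByteArray) {
            // Sign dataThatWasConfirmed with a Keystore key and send it to the relying party.
        }

        override fun onDismissed() {
            // The user declined the transaction.
        }
    })
}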

We also introduced support for a new Keystore type that provides stronger protection for private keys by leveraging tamper-resistant hardware with dedicated CPU, RAM, and flash memory. StrongBox Keymaster is an implementation of the Keymaster hardware abstraction layer (HAL) that resides in a hardware security module. This module is designed and required to have its own processor, secure storage, True Random Number Generator (TRNG), side-channel resistance, and tamper-resistant packaging.
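
For illustration, requesting StrongBox for a new key is a small addition when building a KeyGenParameterSpec; the alias and cipher parameters below are assumptions of ours, and key generation will fail on devices without StrongBox.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// A sketch of generating an AES key backed by StrongBox (API 28+, supported hardware only).
fun generateStrongBoxKey(): SecretKey {
    val spec = KeyGenParameterSpec.Builder(
            "payment_key",                                   // illustrative alias
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setIsStrongBoxBacked(true)                          // request the hardware security module
        .build()

    return KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
        .apply { init(spec) }
        .generateKey()
}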

Other Keystore features (as part of Keymaster 4) include Keyguard-bound keys, Secure Key Import, 3DES support, and version binding. Keyguard-bound keys enable use restriction so as to protect sensitive information. Secure Key Import facilitates secure key use while protecting key material from the application or operating system. You can read more about these features in our recent blog post as well as the accompanying release notes.

Enhancing user privacy

User privacy has been boosted with several behavior changes, such as limiting the access background apps have to the camera, microphone, and device sensors. New permission rules and permission groups have been created for phone calls, phone state, and Wi-Fi scans, as well as restrictions around information retrieved from Wi-Fi scans. We have also added associated MAC address randomization, so that a device can use a different network address when connecting to a Wi-Fi network.

On top of that, Android Pie added support for encrypting Android backups with the user's screen lock secret (that is, PIN, pattern, or password). By design, this means that an attacker would not be able to access a user's backed-up application data without specifically knowing their passcode. Auto backup for apps has been enhanced by providing developers a way to specify conditions under which their app's data is excluded from auto backup. For example, Android Pie introduces a new flag to determine whether a user's backup is client-side encrypted.

As part of a larger effort to move all web traffic away from cleartext (unencrypted HTTP) and towards being secured with TLS (HTTPS), we changed the defaults for Network Security Configuration to block all cleartext traffic. We're protecting users with TLS by default, unless you explicitly opt-in to cleartext for specific domains. Android Pie also adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it. This protects information about IP addresses visited from being sniffed or intercepted on the network level.

We believe that the features described in this post advance the security and privacy posture of Android, but you don't have to take our word for it. Year after year our continued efforts are demonstrably resulting in better protection as evidenced by increasing exploit difficulty and independent mobile security ratings. Now go and enjoy some actual pie while we get back to preparing the next Android dessert release!

Acknowledgements: This post leveraged contributions from Chad Brubaker, Janis Danisevskis, Giles Hogben, Troy Kensinger, Ivan Lozano, Vishwath Mohan, Frank Salim, Sami Tolvanen, Lilian Young, and Shawn Willden.

December 20th 2018, 1:54 pm

Wrapping up for 2018 with Google Play and Android

Android Developers Blog

Posted by Patricia Correa, Platforms & Ecosystems

Earlier this year we highlighted some of Google Play's milestones and commitments in supporting the 1M+ developers on the Play Store, as well as those of you working on Android apps and games and looking to launch and grow your business on our platforms. We have been inspired and humbled by the achievements of app and game developers, building experiences that delight and help people everywhere, as some stories highlighted in #IMakeApps.

We continue to focus on helping you grow thriving businesses and building tools and resources to help you reach and engage more users in more places, whilst ensuring a safe and secure ecosystem. Looking to 2019, we are excited about all the things to come and seeing more developers adopt new features and update to Android P.

In the meantime let's share some of the 2018 highlights on Google Play and Android:

Building for the future

Along with Android P we have continued to help the Android developer ecosystem, launching Android Jetpack, the latest Android Studio, and Kotlin support. Developers are also now able to add rich and dynamic UI templates with Slices in places such as Google Search and Assistant, APIs for new screens support, and much more. Discover the latest from Android 9, API Level 28.

Smaller apps have higher conversion rates and our research shows that a large app size is a key driver of uninstalls. At I/O we launched a new publishing format, the Android App Bundle, helping developers to deliver smaller and more efficient apps with a simplified release process, and with features on demand - saving on average 35% in download size! On devices using Android M and above, app bundles can reduce app size even further, by automatically supporting uncompressed native libraries, thereby eliminating duplication on devices.

You can build app bundles in the Android Studio 3.2 stable release and in Unity 2018.3 beta, and upload larger bundles with installed APK sizes of up to 500MB without using expansion files, through an early access feature soon to be available to all developers.

Richer experiences and discovery

Discovery of your apps and games is important, so we launched Google Play Instant and increased the size limit to 10MB to enable TRY NOW on the Play Store, and removed the URL requirement for Instant apps. The Android Studio 3.3 beta release lets you publish a single app bundle and mark it, or a particular module, as instant-enabled (without maintaining separate code).

For game developers, Unity introduced the Google Play Instant plug-in and instant app support is built into the new Cocos Creator. Our app pre-registration program has seen nearly 250 million app pre-registrations, helping drive app downloads through richer discovery.

Optimizing for quality and performance

Android vitals are now more actionable, with a dashboard highlighting core vitals, peer benchmarks, start-up time and permission denials vitals, anomaly detection and alerts, and linking pre-launch reports - all so that you can better optimize and prioritize issues for improved quality and performance.

There are more opportunities to get feedback and fix issues before launch. The Google Play Console expanded the functionality of automated device testing with a pre-launch report for games, and the launch of the internal and closed test tracks lets you push your app to up to 100 internal testers, before releasing them to production.

Insights for your business, now and in the long term

Metrics are critical to optimize your business, and we've added new customizable tools in the Play Console with downloadable reports to help you evaluate core metrics, including cumulative data, 30-day rolling averages, and roll-ups for different time periods to better match the cadence of your business.

You can now configure the statistics report to show how your instant apps are performing, analyze different dimensions, and identify how many users install the final app on their device. The acquisition report shows users' discovery journey through to conversion - with average revenue per user and retention benchmarks against similar apps. You can also find the best performing search terms for your store listing with organic breakdown - helping to optimize efforts to grow and retain a valuable audience.

Increasingly, developers are adopting subscriptions as their core monetization model. The dedicated new subscriptions center means you can easily change subscription prices, offer partial refunds for in-app products and subscriptions, and make plan changes in Play Billing Library version 1.2. Learn how to keep subscribers engaged: users can pause plans, and you get more control with order management and the cancellation survey.

Discover how to use all the new features and best practices on the Academy for App Success, our interactive free e-learning platform, offering bite-sized courses that help you make the most of Play Console and improve your app quality.

Make sure you follow @googleplaydev and sign up to our newsletter to stay ahead of all our updates in 2019! We hope these features and tools will enable us to continue a successful partnership with you in the New Year - follow our countdown for a daily highlight. From all of us at Google Play - happy holidays.

How useful did you find this blog post?

December 18th 2018, 1:46 pm

In reviews we trust — Making Google Play ratings and reviews more trustworthy

Android Developers Blog

Posted by Fei Ye, Software Engineer and Kazushi Nagayama, Ninja Spamologist

Google Play ratings and reviews are extremely important in helping users decide which apps to install. Unfortunately, fake and misleading reviews can undermine users' trust in those ratings. User trust is a top priority for us at Google Play, and we are continuously working to make sure that the ratings and reviews shown in our store are not being manipulated.

There are various ways in which ratings and reviews may violate our developer guidelines:

When we see these, we take action on the app itself, as well as the review or rating in question.

In 2018, the Google Play Trust & Safety teams deployed a system that combines human intelligence with machine learning to detect and enforce policy violations in ratings and reviews. A team of engineers and analysts closely monitor and study suspicious activities in Play's ratings and reviews, and improve the model's precision and recall on a regular basis. We also regularly ask skilled reviewers to check the decisions made by our models for quality assurance.

It's a big job. To give you a sense of the volume we manage, here are some numbers from a recent week:

Our team can do a lot, but we need your help to keep Google Play a safe and trusted place for apps and games.

If you're a developer, you can help us by doing the following:

Example of a violation: incentivized ratings are not allowed.

If you're a user, you can follow these simple guidelines as well:

Finally, if you find bad ratings and reviews on Google Play, help us improve by sending your feedback! Users can mark the review as "Spam" and developers can submit feedback through the Play Console.

Tooltip to flag the review as Spam.

Thanks for helping us keep Google Play a safe and trusted place to discover some of the world's best apps and games.

How useful did you find this blog post?

December 17th 2018, 3:44 pm

More visibility into the Android Open Source Project

Android Developers Blog

Posted by Jeff Bailey, AOSP Team

AOSP has been around for more than 10 years and visibility into the project has often been restricted to the Android Team and Partners. A lot of that has been rooted in business needs: we want to have fun things to show off at launches and the code wasn't factored in a way that let us do more in the open.

At the Android Developer Summit last month, we demoed GSI running on a number of partner devices, enabled through Project Treble. The work done to make that happen has provided the separation needed, and has also made it easier to work with our partners to upstream fixes for Android into AOSP. As a result of this, more than 40% of the commits to our git repository came in through our open source tree in Q3 of this year.

Publishing Android's Continuous Integration Dashboard

In order to support the developers working directly in AOSP and our partners upstreaming changes, we have enabled more than 8000 tests in presubmit -- tests that are run before the code is checked in -- and are working to add other continuous testing like the Compatibility Test Suite which ensures that our AOSP trees are in a continuously releasable state. Today we are excited to open this up for you through https://ci.android.com/.

On this dashboard, across the top are the targets that we are building, down the left are the revisions. As we add more targets (such as GSI), they will appear here. Each square in the table provides access to the build artifacts. An anchor on the left provides a permanent URL for that revision. Find out more at https://source.android.com/setup/build/dashboard.

Our DroidCop team (similar to Chromium's Tree Sheriffs) watches this dashboard and works with developers to ensure the health of the tree. This is just the start for us and we are building on this tool to add more in the coming months.

I'd like to thank the Android Engineering Productivity Team for embracing this and I'm excited for us to take this step! I'd love to hear how you use this. Contact me at @jeffbailey-aosp on Twitter, jeffbailey+aosp@google.com, or tag /u/jeffbailey in a post to reddit.com/r/androiddev.

December 14th 2018, 1:08 pm

Notifications from the Twitter app are easier on your battery

Android Developers Blog

This blogpost is a collaboration between Google and Twitter, authored by Jingwei Hao with support from César Puerta and Fred Lohner from Twitter, and developed with Jingyu Shi from Google.

Push notifications are an important way to keep Twitter users informed about what's happening. However, they can be a significant and often overlooked source of battery drain. For example: high priority notifications can wake a phone from Doze, and fetching data upon push notification delivery via the network can drain the battery quickly.
As app developers at Twitter, we know that battery life is an important aspect of the mobile experience for our users. Over time we've taken several steps to optimize our app to work with the power saving features, particularly around push notifications. In this article, we'll share what we did to save battery life on our users' devices in the hope this will help other developers optimize their apps as well.

Firebase Cloud Messaging migration

Earlier this year, we upgraded our notifications messaging library to the next evolution of GCM: Firebase Cloud Messaging (FCM). This gave us the ability to use the latest APIs and get access to the additional features Firebase has to offer.
This was a very unique migration for us for multiple reasons:
  • Push notifications are an important part of Twitter's mobile engagement strategy. Notifications help our users stay informed and connected with the world, and helped a man get his nuggs. Therefore, we couldn't afford having unreliable delivery of notifications during and after the migration, which would negatively impact the platform.
  • Since it's not possible to support both GCM and FCM in the application, we were not able to use typical A/B testing techniques during the migration.
We mitigated the risks by thoroughly testing across dogfood, alpha, and beta users for both migration and rollback paths, with real time metrics monitoring, and a staged app roll out. During the migration, we were pleased to find that replacing GCM with FCM worked smoothly in all our use cases.
Migrating to FCM proved valuable to us when testing against the power saving features on Android. With APIs like getPriority() and getOriginalPriority(), FCM gave us insight into whether the priority of FCM messages had been downgraded.
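
A minimal sketch of reading those two values inside a FirebaseMessagingService is shown below; the class name and logging are ours and purely illustrative.

import android.util.Log
import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage

// A sketch of detecting priority downgrades when an FCM message arrives.
class PushMessagingService : FirebaseMessagingService() {
    override fun onMessageReceived(message: RemoteMessage) {
        val wasDowngraded =
            message.originalPriority == RemoteMessage.PRIORITY_HIGH &&
            message.priority == RemoteMessage.PRIORITY_NORMAL
        if (wasDowngraded) {
            // The system downgraded this message, e.g. because the app exhausted its
            // App Standby Bucket quota; record it for monitoring.
            Log.w("Push", "High priority FCM message was delivered as normal priority")
        }
        // ... build and post the notification as usual ...
    }
}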

Set the right FCM message priority

On the backend at Twitter, we always try to make sure that notifications are assigned the appropriate priority, making sure that high priority FCM messages are only used to generate a user-visible notification. In fact, only a very small fraction of the notifications we send are classified as high priority.
App Standby Buckets were introduced in Android 9 Pie; this feature imposes restrictions on the number of high priority messages the app will receive based on which bucket the app belongs to. As a result, high priority messages should be reserved for the notifications users are more likely to interact with. Using high priority FCM messages for actions which do not involve user interaction can lead to negative consequences: for example, once an app exhausts its App Standby Bucket quota, subsequent genuinely urgent FCM messages will be downgraded to normal priority and delayed when the device is in Doze.
To understand how our app performs with App Standby Buckets, we gathered statistics on the notification priorities at both send and delivery time for the Twitter App:
  • During our test, we observed that none of the high priority FCM messages were downgraded, even for the 2% of devices which were bucketed in the frequent or lower bucket when the notifications were delivered. This is particularly worth noting since the system could potentially impose restrictions on high priority FCM messages for devices in low active buckets.
  • 86% of the devices were bucketed in the active bucket when high priority FCM messages are delivered. This is another positive signal that our priority assignment of the messages is consistent with the use pattern from the users.
This was very encouraging to see, as it means all users would receive the important notifications without any delay, and this is true whether or not the app is considered active by the Android system. This also confirms that we are correctly categorizing high priority FCM messages based on our users' engagement patterns.

Avoid follow-up data prefetch for notifications

Prefetching data is a popular practice to enrich the user experience around notifications. This entails including a piece of metadata within the payload of a notification. When the notification is delivered, the app leverages the payload data to start one or multiple network calls to download more data before rendering the notification. The FCM payload has a 4KB max limit, and when more data is needed to create rich notifications, this prefetch practice is used. But doing so has a trade-off: it increases both device power consumption and notification latency. At Twitter, among all the types of notifications we push to our users, only one type does prefetching, and it makes up less than 1% of the volume. In addition, in the cases where data prefetching for notifications is unavoidable, it should be scheduled with JobScheduler or WorkManager tasks in order to avoid issues with the background execution limits in Oreo+.
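
Where prefetching genuinely is unavoidable, a rough sketch of deferring it to WorkManager instead of fetching directly in the FCM callback might look like this (the worker and helper names are ours):

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Hypothetical worker that downloads the extra data referenced by a notification payload.
class PrefetchWorker(ctx: Context, params: WorkerParameters) : Worker(ctx, params) {
    override fun doWork(): Result {
        // Fetch and cache the rich notification content here.
        return Result.success()
    }
}

// Schedule the prefetch so the system can batch it and respect background execution limits.
fun schedulePrefetch(context: Context) {
    val request = OneTimeWorkRequestBuilder<PrefetchWorker>()
        .setConstraints(
            Constraints.Builder()
                .setRequiredNetworkType(NetworkType.CONNECTED)
                .build())
        .build()
    WorkManager.getInstance(context).enqueue(request)
}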

Create notification channels

In addition, Notification Channels were introduced in the Android 8 Oreo release. We designed the importance level of our channels with multiple factors in mind, user experience being the most important and power saving being another. Currently, Twitter for Android has nine notification channels, among which only direct messaging, emergency, and security are designated as high importance, leaving most of the channels with a lower importance level and making them less intrusive.
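
As a sketch, registering one high-importance and one low-importance channel (the IDs and names below are illustrative, not Twitter's actual channels) looks roughly like this:

import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// A sketch of registering channels with deliberately chosen importance levels (API 26+).
fun createChannels(context: Context) {
    val manager = context.getSystemService(NotificationManager::class.java)

    // Reserved for messages the user is very likely to act on immediately.
    manager.createNotificationChannel(
        NotificationChannel("direct_messages", "Direct Messages",
            NotificationManager.IMPORTANCE_HIGH))

    // Everything else stays less intrusive.
    manager.createNotificationChannel(
        NotificationChannel("recommendations", "Recommendations",
            NotificationManager.IMPORTANCE_LOW))
}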

Summary

We set out to improve our user experience by migrating to FCM, prioritizing notifications cautiously, limiting prefetching, and carefully designing notification channels. We were glad to find that these changes had a positive impact on battery performance and enabled our app to take advantage of the power-saving features introduced in recent versions of Android.
Designing for power optimization in a large evolving application is a complex and ongoing process, particularly as the Android platform grows and provides more granular controls. At Twitter we strongly believe in continuous refinement to improve application performance and resource consumption. We hope this discussion is useful in your own quest to optimize your app's performance and use of resources 💙

December 13th 2018, 8:12 pm

New Keystore features keep your slice of Android Pie a little safer

Android Developers Blog

Posted by Brian Claire Young and Shawn Willden, Android Security; and Frank Salim, Google Pay

New Android Pie Keystore Features

The Android Keystore provides application developers with a set of cryptographic tools that are designed to secure their users' data. Keystore moves the cryptographic primitives available in software libraries out of the Android OS and into secure hardware. Keys are protected and used only within the secure hardware to protect application secrets from various forms of attacks. Keystore gives applications the ability to specify restrictions on how and when the keys can be used.

Android Pie introduces new capabilities to Keystore. We will be discussing two of these new capabilities in this post. The first enables restrictions on key use so as to protect sensitive information. The second facilitates secure key use while protecting key material from the application or operating system.

Keyguard-bound keys

There are times when a mobile application receives data but doesn't need to immediately access it if the user is not currently using the device. Sensitive information sent to an application while the device screen is locked must remain secure until the user wants access to it. Android Pie addresses this by introducing keyguard-bound cryptographic keys. When the screen is locked, these keys can be used in encryption or verification operations, but are unavailable for decryption or signing. If the device is currently locked with a PIN, pattern, or password, any attempt to use these keys will result in an invalid operation. Keyguard-bound keys protect the user's data while the device is locked, and only available when the user needs it.

Keyguard binding and authentication binding both function in similar ways, except with one important difference. Keyguard binding ties the availability of keys directly to the screen lock state while authentication binding uses a constant timeout. With keyguard binding, the keys become unavailable as soon as the device is locked and are only made available again when the user unlocks the device.

It is worth noting that keyguard binding is enforced by the operating system, not the secure hardware. This is because the secure hardware has no way to know when the screen is locked. Hardware-enforced Android Keystore protection features like authentication binding can be combined with keyguard binding for a higher level of security. Furthermore, since keyguard binding is an operating system feature, it's available to any device running Android Pie.

Keys for any algorithm supported by the device can be keyguard-bound. To generate or import a key as keyguard-bound, call setUnlockedDeviceRequired(true) on the KeyGenParameterSpec or KeyProtection builder object at key generation or import.
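
A minimal sketch of generating such a key follows; the alias and cipher parameters are illustrative assumptions.

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// A sketch of a keyguard-bound AES key: encryption still works while the device is
// locked, but decryption is unavailable until the user unlocks it (API 28+).
fun generateKeyguardBoundKey(): SecretKey {
    val spec = KeyGenParameterSpec.Builder(
            "inbox_key",                                     // illustrative alias
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setUnlockedDeviceRequired(true)                     // keyguard binding
        .build()

    return KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
        .apply { init(spec) }
        .generateKey()
}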

Secure Key Import

Secure Key Import is a new feature in Android Pie that allows applications to provision existing keys into Keystore in a more secure manner. The origin of the key, a remote server that could be sitting in an on-premise data center or in the cloud, encrypts the secure key using a public wrapping key from the user's device. The encrypted key in the SecureKeyWrapper format, which also contains a description of the ways the imported key is allowed to be used, can only be decrypted in the Keystore hardware belonging to the specific device that generated the wrapping key. Keys are encrypted in transit and remain opaque to the application and operating system, meaning they're only available inside the secure hardware into which they are imported.

Secure Key Import is useful in scenarios where an application intends to share a secret key with an Android device, but wants to prevent the key from being intercepted or from leaving the device. Google Pay uses Secure Key Import to provision some keys on Pixel 3 phones, to prevent the keys from being intercepted or extracted from memory. There are also a variety of enterprise use cases, such as S/MIME encryption keys being recovered from a Certificate Authority's escrow so that the same key can be used to decrypt emails on multiple devices.

To take advantage of this feature, please review this training article. Please note that Secure Key Import is a secure hardware feature, and is therefore only available on select Android Pie devices. To find out if the device supports it, applications can generate a KeyPair with PURPOSE_WRAP_KEY.

December 12th 2018, 1:46 pm

Effective foreground services on Android

Android Developers Blog

Posted by Keith Smyth

This is the fourth in a series of blog posts in which we outline strategies and guidance for Android with regard to power.

A process is not forever

Android is a mobile operating system designed to work with constrained memory and battery. For this reason, a typical Android application can have its process killed by the system to recover memory. The process being killed is chosen based on a ranking system of how important that process is to the user at the time. Here, in descending order, is the ranking of each class of process. The higher the rank, the less likely that process is to be killed.

  • Native: Linux daemon processes responsible for running everything (including the process killer itself).
  • System: The system_server process, which is responsible for maintaining this list.
  • Persistent apps: Apps like Phone, Wi-Fi, and Bluetooth that are crucial to keeping your device connected and able to provide its most basic features.
  • Foreground app: The foregrounded / top (user visible) app is the app a user is currently using.
  • Perceptible apps: Apps that the user can perceive are running. For example, an app with a foreground service playing audio, or an app set as the preferred voice interaction service, will be bound to the system_server, effectively promoting it to the Perceptible level.
  • Service: Background services like the download manager and sync manager.
  • Home: The Launcher app, containing the desktop wallpaper.
  • Previous app: The previous foreground app the user was using. The previous app lives above the cached apps as it's the most likely app the user will switch to next.
  • Cached apps: The remaining apps that have been opened by the user, and then backgrounded. They will be killed first to recover memory, and have the most restrictions applied to them on modern releases. You can read about them on the Behavior Changes pages for Nougat, Oreo and Pie.


The foreground service

There is nothing wrong with becoming a cached app: Sharing the user's device is part of the lifecycle that every app developer must accept to keep a happy ecosystem. On a device with a dead battery, 100% of the apps go unused. And an app blamed for killing the battery could even be uninstalled.

However, there are valid scenarios to promote your app to the foreground: the prerequisites for using a foreground service are that your app is executing a task that is immediate, important (it must complete), and perceptible to the user (most often because it was started by the user), and that the task has a well-defined start and finish. If a task in your app meets these criteria, then it can be promoted to the foreground until the task is complete.

There are some guidelines around creating and managing foreground services. For all API levels, a persistent notification with at least PRIORITY_LOW must be shown while the service is created. When targeting API 26+ you will also need to set the notification channel to at least IMPORTANCE_LOW. The notification must give the user a way to cancel the work; this cancellation can be tied to the action itself: for example, stopping a music track can also stop the music-playback service. Last, the title and description of the foreground service notification must show an accurate description of what the foreground service is doing.
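
Putting those guidelines together, a minimal sketch of a service promoting itself to the foreground (the channel ID, strings, and notification ID are illustrative) might look like this:

import android.app.NotificationChannel
import android.app.NotificationManager
import android.app.Service
import android.content.Intent
import android.os.IBinder
import androidx.core.app.NotificationCompat

// A sketch of a Service promoting itself to the foreground with a low-importance,
// user-cancellable notification that accurately describes the work.
class PlaybackService : Service() {
    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        // API 26+; add a version check if you also ship to older devices.
        getSystemService(NotificationManager::class.java).createNotificationChannel(
            NotificationChannel("playback", "Playback", NotificationManager.IMPORTANCE_LOW))

        val notification = NotificationCompat.Builder(this, "playback")
            .setContentTitle("Playing your mix")           // accurate description of the work
            .setContentText("Tap stop to end playback")
            .setSmallIcon(android.R.drawable.ic_media_play)
            .setPriority(NotificationCompat.PRIORITY_LOW)  // pre-O devices read this value
            // .addAction(...) should give the user a way to stop the work, e.g. a Stop action.
            .build()

        startForeground(1, notification)
        return START_STICKY
    }

    override fun onBind(intent: Intent?): IBinder? = null
}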

To read more about foreground services, including several important updates in recent releases, see Running a service in the foreground

Foreground service use cases

Some good example usages of foreground services are playing music, completing a purchase transaction, high-accuracy location tracking for exercise, and logging sensor data for sleep. The user will initiate all of these activities, they must happen immediately, have an explicit beginning and end, and all can be cancelled by the user at any time.

Another good use case for a foreground service is to ensure that critical, immediate tasks (e.g. saving a photo, sending a message, processing a purchase) are completed if the user switches away from the application and starts a new one. If the device is under high memory pressure it could kill the previous app while it is still processing, causing data loss or unexpected behavior. An elegantly written app will detect being backgrounded and respond by promoting its short, critical task to the foreground to complete.

If you feel you need your foreground service to stay alive permanently, then this is an indicator that a foreground service is not the right answer. Many alternatives exist to both meet the requirements of your use case, and be the most efficient with power.

Alternatives

Passive location tracking is a bad use case for foreground services. If the user has consented to being tracked, use the FusedLocationProvider API to receive bundled location updates at longer intervals, or use the geofencing API to be efficiently notified when a user enters or leaves a specified area. Read more about how to optimize location for battery.

If you wish to pair with a Bluetooth companion device, use CompanionDeviceManager. For reconnecting to the device, BluetoothLeScanner has a startScan method that takes a PendingIntent that will fire when a narrow filter is met.

If your app has work that must be done, but does not have to happen immediately: WorkManager or JobScheduler will schedule the work for the best time for the entire system. If the work must be started immediately, but then can stop if the user stops using the app, we recommend ThreadPools or Kotlin Coroutines.

DownloadManager facilitates handling long running downloads in the background. It will even handle retries over poor connections and system reboots for you.

If you believe you have a use case that isn't handled let us know!

Conclusion

Used correctly, the foreground service is a great way to tell Android that your app is doing something important to the user. Making the right decision on which tool to use remains the best way to provide a premium experience on Android for all users. Use the community and Google to help with these important decisions, and always respect the user first.

December 11th 2018, 5:52 pm

Google Play services discontinuing updates for API levels 14 and 15

Android Developers Blog

Posted by Sam Spencer, Technical Program Manager, Google Play

The Android Ice Cream Sandwich (ICS) platform is seven years old and the active device count has been below 1% for some time. Consequently, we are deprecating support for ICS in future releases of Google Play services. For devices running ICS, the Google Play Store will no longer update the Play services APK beyond version 14.7.99.

What does this mean for you as an application developer?

The Google Play services SDK contains the interfaces to the functionality provided by the Google Play services APK, running as background services. The functionality required by the current, released SDK versions is already present on ICS devices with Google Play services and will continue to work without change.

With the SDK version changes earlier this year, each library can be independently released and may update its own minSdkVersion. Individual libraries are not required to change based on this deprecation. Newer SDK components may continue to support API levels 14 and 15 but many will update to require the higher API level. For applications that support API level 16 or greater, you will not need to make any changes to your build. For applications that support API levels 14 or 15, you may continue to build and publish your app to devices running ICS, but you will encounter build errors when updating to newer SDK versions. The error will look like this:

Error:Execution failed for task ':app:processDebugManifest'.
> Manifest merger failed : uses-sdk:minSdkVersion 14 cannot be smaller than version 16 declared in library [com.google.android.gms:play-services-FOO:16.X.YY]
        Suggestion: use tools:overrideLibrary="com.google.android.gms:play_services" to force usage

Unfortunately, the stated suggestion will not help you successfully run your app on older devices. In order to use the newer SDK, you will need to use one of the following options:

1. Target API level 16 as the minimum supported API level.

This is the recommended course of action. To discontinue support for API levels that will no longer receive Google Play services updates, simply increase the minSdkVersion value in your app's build.gradle to at least 16. If you update your app in this way and publish it to the Play Store, users of devices with less than that level of support will not be able to see or download the update. However, they will still be able to download and use the most recently published version of the app that does target their device.

A very small percentage of all Android devices are using API levels less than 16. You can read more about the current distribution of Android devices. We believe that many of these old devices are not actively being used.

If your app still has a significant number of users on older devices, you can use multiple APK support in Google Play to deliver an APK that uses Google Play services 14.7.99. This is described below.

2. Build multiple APKs to support devices with an API level less than 16.

Along with some configuration and code management, you can build multiple APKs that support different minimum API levels, with different versions of Google Play services. You can accomplish this with build variants in Gradle. First, define build flavors for legacy and newer versions of your app. For example, in your build.gradle, define two different product flavors, with two different compile dependencies for the stand-in example play-services-FOO component:

productFlavors {
    legacy {
        minSdkVersion 14
        versionCode 1401  // Min API level 14, v01
    }
    current {
        minSdkVersion 16
        versionCode 1601  // Min API level 16, v01
    }
}

dependencies {
    legacyCompile 'com.google.android.gms:play-services-FOO:16.0.0'
    currentCompile 'com.google.android.gms:play-services-FOO:17.0.0'
}

In the above situation, there are two product flavors being built against two different versions of play-services-FOO. This will work fine if you only call APIs that are available in the 16.0.0 library. If you need to call newer APIs made available with 17.0.0, you will have to create your own compatibility library for the newer API calls so that they are only built into the version of the application that can use them:

  1. Declare a Java interface that exposes the higher-level functionality you want to perform that is only available in current versions of Play services.
  2. Build two Android libraries that implement that interface. The "current" implementation should call the newer APIs as desired. The "legacy" implementation should no-op or otherwise act as desired with older versions of Play services. The interface should be added to both libraries.
  3. Conditionally compile each library into the app using "legacyCompile" and "currentCompile" dependencies as illustrated for play-services-FOO above.
  4. In the app's code, call through to the compatibility library whenever newer Play APIs are required.
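
Condensing steps 1, 2, and 4 into a rough sketch, where FooCompat and doFooThing() are placeholder names for whatever the stand-in play-services-FOO component exposes (the two implementations live in separate library modules, not one file):

import android.content.Context

// Step 1: the shared interface, added to both library modules.
interface FooCompat {
    fun doFooThing(context: Context)
}

// Step 2, "current" library module, compiled against play-services-FOO:17.0.0:
class FooCompatImpl : FooCompat {
    override fun doFooThing(context: Context) {
        // Call the newer APIs that only exist in 17.0.0 here.
    }
}

// Step 2, "legacy" library module, compiled against play-services-FOO:16.0.0:
class FooCompatImpl : FooCompat {
    override fun doFooThing(context: Context) {
        // No-op, or an equivalent fallback for devices stuck on Play services 14.7.99.
    }
}

// Step 4: the app only ever talks to the interface; the legacyCompile / currentCompile
// dependency decides which FooCompatImpl is packaged into each APK.
fun onUserAction(context: Context, compat: FooCompat) = compat.doFooThing(context)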

After building a release APK for each flavor, you then publish them both to the Play Store, and the device will update with the most appropriate version for that device. Read more about multiple APK support in the Play Store.

December 6th 2018, 6:22 pm

Improve media and messaging app integrations with Android Auto

Android Developers Blog

Posted by John Posavatz, Product Manager, Android Auto

At Google I/O this past May, we provided a sneak preview of several new media and messaging features for Android Auto. We are happy to announce that these features are now ready in our latest version of Android Auto, and we encourage you to update your Android Auto implementations to take advantage of them!

New Media Features

Several new features make it easier for users to find the media content that they're looking for. Check out the full documentation for them on our Android developer site.

Search results

After performing an Assistant-based search (e.g. "OK Google, play [artist / album / playlist / book / song / genre]"), music auto-plays as before, and in addition you can now provide your own list of categorized results. First, you'll need to declare support for onSearch() in your MediaBrowserServiceCompat implementation, and then override it. Android Auto forwards a user's search terms to this method whenever a user invokes the "Show more results" affordance.

Android Auto calls onSearch with the same Bundle of extras as the one defined for Android Auto's playFromSearch() calls. Unlike playFromSearch(), onSearch() includes a Result&lt;List&lt;MediaBrowserCompat.MediaItem&gt;&gt; that can be used to return multiple MediaItems back to Android Auto for display.
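
As a sketch, the override inside your MediaBrowserServiceCompat might look like the following, where searchCatalog() is a hypothetical asynchronous lookup in your own catalog:

// Inside your MediaBrowserServiceCompat subclass — a sketch only.
override fun onSearch(
    query: String,
    extras: Bundle?,
    result: Result<List<MediaBrowserCompat.MediaItem>>
) {
    result.detach()                     // the catalog lookup completes asynchronously
    searchCatalog(query) { items ->     // hypothetical async search of your own catalog
        result.sendResult(items)        // return the (optionally title-grouped) MediaItems
    }
}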

You can then categorize search results using title items. For example, music apps may include categories such as "Artists", "Albums" and "Songs".

Improved browse

Content has been "brought forward" out of the drawer, and now resides within the main view of the media screen. Within this new layout, you have the option of displaying your browse tree either as a simple list or as large grids of album art / icons. We recommend using lists wherever the text description is most useful in describing content (e.g. a list of track names or podcast episodes), whereas the larger grid view is most appropriate where the album art / icon aids in quick identification and selection.

To start applying content styles, you should set a global default for how your media items are displayed by applying specific constants in the BrowserRoot extras bundle returned by the onGetRoot() function. Android Auto reads the extras associated with each item in the browse tree, looks for specific constants (detailed in our documentation), and then uses the presence/value of each key to apply the appropriate style.

In order to change the default behavior for a specific node, the Content Style API supports overriding the default global hint for any browsable node's children. The same extras as above can be supplied as extras in the MediaDescription. If these extras are present, then the children of that browsable node will have the new Content Style hint.

Finally, you can organize content using title items to group media in a list. To do this, every media item in the group needs to declare an extra in their media description with the same string value, which you can localize. This value is used as the group title. You also need to pass the media items together and in the order you want them displayed.

Additional metadata icons

In both browsing and playback views, you are now able to show icons next to media items which have explicit language, have been downloaded to the user's device, and which are unplayed / partially played / completed (e.g. for audiobooks and podcasts).

Android Auto inspects extras for each item in the browse tree and looks for the specific keys for the indicators, and then uses the presence/value of each key to add the appropriate indicator.

You should add these extras to content returned by your MediaBrowserService. "Explicit" and "Downloaded" are boolean extras (set to true to show the indicator), while "Completion State" is an integer extra set to the appropriate value. Apps should create an extras bundle that includes one or more of these keys and pass that to MediaDescription.Builder.setExtras().

Updates to Messaging

We are deprecating CarExtender, in favor of the more robust and broadly beneficial MessagingStyle API. Migrating to MessagingStyle is very straightforward, and not only extends messaging support beyond Android Auto (for example, to Google Assistant), but it also brings the following immediate benefits for Android Auto:

Group messaging

Formerly, Android Auto's support for group messages was lacking - in most cases, notifications were never shown. MessagingStyle addresses that, so that your users never miss a message.

MMS / RCS support

Android Auto only supports SMS natively through the system SMS broadcast. MessagingStyle allows support for SMS apps that themselves support RCS and MMS.

To enable your app to provide messaging service for Auto devices, your app must do the following:

  1. Build and send NotificationCompat.MessagingStyle objects that contain reply and mark-as-read Action objects.
  2. Handle replying and marking a conversation as read with a Service.
  3. Configure your manifest to indicate the app supports Android Auto.
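
A condensed sketch of step 1 is shown below; the conversation content, PendingIntents, and notification channel are illustrative placeholders rather than a prescribed implementation.

import android.app.Notification
import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.Person
import androidx.core.app.RemoteInput

// A sketch of a MessagingStyle notification with reply and mark-as-read actions.
fun buildMessagingNotification(
    context: Context,
    replyIntent: PendingIntent,       // fires your reply Service
    markAsReadIntent: PendingIntent   // fires your mark-as-read Service
): Notification {
    val me = Person.Builder().setName("Me").build()
    val sender = Person.Builder().setName("Alex").build()

    val style = NotificationCompat.MessagingStyle(me)
        .addMessage("Are we still on for tonight?", System.currentTimeMillis(), sender)

    val replyAction = NotificationCompat.Action.Builder(
            android.R.drawable.ic_menu_send, "Reply", replyIntent)
        .addRemoteInput(RemoteInput.Builder("key_text_reply").setLabel("Reply").build())
        .setSemanticAction(NotificationCompat.Action.SEMANTIC_ACTION_REPLY)
        .setShowsUserInterface(false)
        .build()

    val markAsReadAction = NotificationCompat.Action.Builder(
            android.R.drawable.ic_menu_view, "Mark as read", markAsReadIntent)
        .setSemanticAction(NotificationCompat.Action.SEMANTIC_ACTION_MARK_AS_READ)
        .setShowsUserInterface(false)
        .build()

    return NotificationCompat.Builder(context, "messages")   // assumes a "messages" channel exists
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setStyle(style)
        .addAction(replyAction)
        .addInvisibleAction(markAsReadAction)
        .build()
}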

By moving to MessagingStyle, your app will not only gain automotive support, but also gain a richer mobile notification experience including inline replying, image preview, and conversation history; all within the notification shade.

An in-depth guide to implementing (or updating to) MessagingStyle can be found in our online developer documentation.

Thanks for continuing to support Android Auto!

December 5th 2018, 6:34 pm

Android codelab courses are here!

Android Developers Blog

Jocelyn Becker, Senior Program Manager, Google Developer Training

The Google Developers Training team recently published an updated version of our Android Developer Fundamentals course as a series of Google codelabs.

Codelabs made their debut as onsite tutorials at Google I/O in 2015, and have skyrocketed in popularity as a way for developers to learn how to use Google technologies, APIs, and SDKs. A codelab is a short, self-contained tutorial that walks you through how to do a specific task. More than 2 million users have worked through Google codelabs this year.

Our Android courses were originally intended as classroom-based courses. However, we found that many people work through the courses on their own, outside of formal teaching programs. So, when we updated the Android Developer Fundamentals course, in addition to supporting classroom-based learning, we made the material available as a sequential series of codelabs.

Android Developer Fundamentals course

The updated Android Developer Fundamentals (V2) course includes lessons on using the Room database and other Architecture Components. All the apps have been updated to reflect that the Empty Activity template in Android Studio uses ConstraintLayout, and we've updated all apps to a later version of Android Studio. For more details on the differences, see the release notes.

Advanced Android course

We've also re-published the Advanced Android Developer course as a series of codelabs. This course provides step-by-step instructions for learning about more advanced topics, and adding features to your app to improve user engagement and delight. Learn how to add maps to your apps, create custom views, use SurfaceView to draw directly to the screen, and much more.

Teaching materials for both courses

If you want to teach either course in the classroom, or use it as the basis of a study jam, the complete package is still available, including slide decks, source code in GitHub, and concept guides, in addition to the step-by-step codelabs.

December 4th 2018, 2:09 pm

Celebrating the developers behind the best apps and games of 2018

Android Developers Blog

Posted by Purnima Kochikar, Director, Business Development, Games & Applications

Today, we announced our annual Best of 2018 list, highlighting the best content on Google Play. But ever wonder about the makers behind your favorite apps and games like PUBG MOBILE or Tasty? Well, we wanted to take a moment to celebrate the developers that brought to life the best U.S. apps and games of 2018. And this year was jam packed with entertainment — all thanks to the developers who pushed the envelope and sparked our imaginations.

Check out the full rundown of the developers behind the best apps and games of 2018 on Google Play:

Best App of 2018

Most Entertaining Apps

Best Self Improvement Apps

Best Daily Helper Apps

Best Hidden Gem Apps

Best Game of 2018

Most Competitive Games

Most Innovative Games

Best Indie Games

Most Casual Games

December 3rd 2018, 1:06 pm

SDK Developers: sign up to stay up to date with latest tips, news and updates

Android Developers Blog

Posted by Parul Soi, Strategic Partner Development Manager, Google Play

Android is fortunate to have an incredibly rich ecosystem of SDKs and libraries to help developers build great apps more efficiently. These SDKs can range from developer tools that simplify complicated feature development to end-to-end services such as analytics, attribution, engagement, etc. All of these tools can help Android developers reduce cost and ship products faster.

For the past few months, various teams at Google have been working together on new initiatives to expand the resources and support we offer for the developers of these tools. Today, SDK developers can sign up and register their SDK with us to receive updates that will keep them informed about Google Play policy changes, updates to the platform, and other useful information.

Our goal is to provide you with whatever you need to better serve your technical and business goals in helping your partners create better apps or games. Going forward we will be sharing further resources to help SDK developers, so stay tuned for more updates.

If you develop an SDK or library for Android, make sure you sign up and register your SDK to receive updates about the latest tools and information to help serve customers better. And, if you're an app developer, share this blogpost with the developers of the SDKs that you use!

How useful did you find this blog post?

November 30th 2018, 1:52 pm

Fast Pair Update

Android Developers Blog

Posted by Seang Chau (VP Engineering)

Last year we announced Fast Pair, a set of specs that make it easier to connect Bluetooth headsets and speakers to Android devices.

Today, we're making it easier for people to connect Fast Pair compatible accessories to devices associated with the same Google Account. Fast Pair will connect accessories to a user's current and future Android phones (6.0+), and we're adding support for Chromebooks in 2019.

Fast Pair provides stress-free Bluetooth pairing for your Android phone.

We have been working closely with dozens of manufacturers, many of which are bringing new Fast Pair compatible devices to market over the coming months. This includes Jaybird, who is already selling the Tarah Wireless Sport Headphones, as well as upcoming products from prominent brands such as Anker SoundCore, Bose, and many more.

The Jaybird Tarah, Fast Pair compatible sport headphones already available in the market.

We also want to make it easy for manufacturers to ship compatible products with minimal additional engineering effort. We collaborated with industry leading Bluetooth audio companies such as Airoha Technology Corp., BES and Qualcomm Technologies International, Ltd. (QTIL) to add native Fast Pair support to their software development kits.

If you are a manufacturer interested in creating Fast Pair compatible Bluetooth devices, just head to our Nearby Devices console to register your product and validate that it has correctly implemented the Fast Pair specs.

November 27th 2018, 1:01 pm

Wear OS by Google: final API 28 emulator with new redesigned UI

Android Developers Blog

Posted by Hoi Lam, Lead Developer Advocate

Today, we are launching the final API 28 emulator image for developers. This image will also contain the UI redesign we announced in August. You should verify that your app's notifications work well with the new notification stream, and that your app works well with the changes previously announced for API 28.

What's new in API 28?

Here are the highlights of the API 28 emulator:

Please note that changes related to the new notification stream are being rolled out to devices supporting API 25 and up. You can test how your notification will behave now, before roll-out is complete, by using the API 28 emulator image.

Keep your feedback coming

Just because we have reached the release build does not mean that our work stops here. Please continue to submit all bug and enhancement requests via the Wear OS by Google issue tracker.

Finally, we are grateful for all of your valuable feedback during the developer preview. It played an important role in our decision making process - especially concerning App Standby Buckets. Thank you!

November 20th 2018, 1:00 pm

Getting screen brightness right for every user

Android Developers Blog

Posted by Ben Murdoch, Software Engineer and Michael Wright, Android Framework Engineer

The screen on a mobile device is critical to the user experience. The improved Adaptive Brightness feature in Android P automatically manages the display to match your preferences for brightness level so you get the best experience, whatever the current lighting environment.

Screen brightness in Android is managed via Quick Settings or via the settings app

(Settings → Display → Brightness Level).

In Android Pie, Adaptive Brightness is enabled by default (Settings → Display → Adaptive Brightness).

While enabled, Android automatically selects a screen brightness that's suitable for the user's current ambient light conditions. Prior to Android Pie, the brightness slider didn't represent an absolute screen brightness level, but a global adjustment factor for boosting or reducing the device manufacturer's preset screen brightness curve across all ambient light levels:

* Setting the slider to center resulted in the device using the preset.

* Setting the slider to the left of center applied a negative scale factor, making the screen dimmer than the preset.

* Setting the slider to the right of center applied a positive scale factor, making the screen brighter than the preset.

So, under low ambient light conditions, you might prefer a brighter screen than the preset level and move the brightness slider up accordingly. But, because that adjustment would boost the brightness at all ambient light levels, you might find yourself needing to move the brightness slider back down in brighter ambient light. And so on, back and forth.
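
To make the old model concrete, here is a small illustrative Kotlin sketch; the preset curve and the numbers are invented for the example and are not any device's actual values:

fun presetBrightness(ambientLux: Float): Float = when {
    ambientLux < 10f -> 5f      // dark room
    ambientLux < 1000f -> 100f  // typical indoor lighting
    else -> 400f                // outdoors
}

// adjustment in [-1, 1]: 0 uses the preset, negative dims, positive brightens,
// and the same factor is applied at every ambient light level
fun legacyBrightness(ambientLux: Float, adjustment: Float): Float =
    presetBrightness(ambientLux) * (1f + adjustment.coerceIn(-1f, 1f))

Because a single factor scales the whole curve, a boost chosen in a dark room also brightens the screen outdoors, which leads to the back-and-forth described above.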

To improve this experience, we've introduced two important changes to screen brightness in Android Pie:

  1. Better slider control
  2. Personalization of the brightness level

Better slider control

The slider control now represents absolute screen brightness rather than the global adjustment factor. That means that you may see it move on its own while Adaptive Brightness is on. This is expected behavior!

Humans perceive brightness on a logarithmic rather than a linear scale. That means changes in screen brightness are much more noticeable when the screen is dark versus bright. To match this difference in perception, we updated the brightness slider UI in the notification shade and System Settings app to work on a more human-like scale. This means you may need to move the slider farther to the right than you did on previous versions of Android for the same absolute screen brightness, and that when setting a dark screen brightness you have more precise control over exactly which brightness to set.
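
To illustrate what a more human-like scale means, the sketch below maps a linear slider position onto an absolute brightness by interpolating in log space, so equal slider steps multiply brightness by a roughly constant factor. The nit range and the mapping are assumptions for the sketch, not the framework's actual curve:

import kotlin.math.exp
import kotlin.math.ln

fun sliderToNits(position: Float, minNits: Float = 2f, maxNits: Float = 500f): Float {
    val p = position.coerceIn(0f, 1f).toDouble()
    val logMin = ln(minNits.toDouble())
    val logMax = ln(maxNits.toDouble())
    // interpolate in log space, so steps feel perceptually similar
    return exp(logMin + p * (logMax - logMin)).toFloat()
}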

Personalization of screen brightness

Prior to Android P, when developing a new Android device the device manufacturer would determine a baseline mapping from ambient brightness to screen brightness based on the display manufacturer's recommendation and a bit of experimentation. All users of that device would receive the same baseline mapping and, while using the device, move the brightness slider around to set their global adjustment factor. To determine the final screen brightness, the system would first look at the room brightness and the baseline mapping to find the default screen brightness for that situation, and then apply the global adjustment factor.

What we found is that in many cases this global adjustment factor didn't adequately capture personal preferences - that is, users tended to change the slider often for new lighting environments.

For Android Pie we worked with researchers from DeepMind to build a machine learning model that will observe the interactions that a user makes with the screen brightness slider, and train on-device to personalize the mapping of ambient light level to screen brightness.

This means that Android will learn what screen brightness is comfortable for a user in a given lighting environment. The user teaches it by manually adjusting the slider, and, as the software trains over time, the user should need to make fewer manual adjustments. In testing the feature, we've observed that after a week almost half our test users are making fewer manual adjustments while the total number of slider interactions across all internal test users was reduced by over 10%. The model that we've developed is updatable and will be tuned based on real world usage now that Android Pie has been released. This means that the model will continue to get better over time.

We believe that screen brightness is one of those things that should just work, and these changes in Android Pie are a step towards realizing that. For the best performance no matter where you are, the models run directly on the device rather than in the cloud, and they train overnight while the device charges.

The improved Adaptive Brightness feature is now available on Pixel devices and we are working with our OEM partners now to incorporate Adaptive Brightness into Android Pie builds for their devices.


November 19th 2018, 2:26 pm

Combating Potentially Harmful Applications with Machine Learning at Google: Datasets and Models

Android Developers Blog

Posted by Mo Yu, Android Security & Privacy Team

In a previous blog post, we talked about using machine learning to combat Potentially Harmful Applications (PHAs). This blog post covers how Google uses machine learning techniques to detect and classify PHAs. We'll discuss the challenges in the PHA detection space, including the scale of data, the correct identification of PHA behaviors, and the evolution of PHA families. Next, we will introduce the two datasets that make the training and implementation of machine learning models possible: app analysis data and Google Play data. Finally, we will present some of the approaches we use, including logistic regression and deep neural networks.

Using machine learning to scale

Detecting PHAs is challenging and requires a lot of resources. Our security experts need to understand how apps interact with the system and the user, analyze complex signals to find PHA behavior, and evolve their tactics to stay ahead of PHA authors. Every day, Google Play Protect (GPP) analyzes over half a million apps, which makes a lot of new data for our security experts to process.

Leveraging machine learning helps us detect PHAs faster and at a larger scale. We can detect more PHAs just by adding additional computing resources. In many cases, machine learning can find PHA signals in the training data without human intervention. Sometimes, those signals are different than signals found by security experts. Machine learning can take better advantage of this data, and discover hidden relationships between signals more effectively.

There are two major parts of Google Play Protect's machine learning protections: the data and the machine learning models.

Data sources

The quality and quantity of the data used to create a model are crucial to the success of the system. For the purpose of PHA detection and classification, our system mainly uses two anonymous data sources: data from analyzing apps and data from how users experience apps.

App data

Google Play Protect analyzes every app that it can find on the internet. We created a dataset by decomposing each app's APK and extracting PHA signals with deep analysis. We execute various processes on each app to find particular features and behaviors that are relevant to the PHA categories in scope (for example, SMS fraud, phishing, privilege escalation). Static analysis examines the different resources inside an APK file while dynamic analysis checks the behavior of the app when it's actually running. These two approaches complement each other. For example, dynamic analysis requires the execution of the app regardless of how obfuscated its code is (obfuscation hinders static analysis), and static analysis can help detect cloaking attempts in the code that may in practice bypass dynamic analysis-based detection. In the end, this analysis produces information about the app's characteristics, which serve as a fundamental data source for machine learning algorithms.

Google Play data

In addition to analyzing each app, we also try to understand how users perceive that app. User feedback (such as the number of installs, uninstalls, user ratings, and comments) collected from Google Play can help us identify problematic apps. Similarly, information about the developer (such as the certificates they use and their history of published apps) contribute valuable knowledge that can be used to identify PHAs. All these metrics are generated when developers submit a new app (or new version of an app) and by millions of Google Play users every day. This information helps us to understand the quality, behavior, and purpose of an app so that we can identify new PHA behaviors or identify similar apps.

In general, our data sources yield raw signals, which then need to be transformed into machine learning features for use by our algorithms. Some signals, such as the permissions that an app requests, have a clear semantic meaning and can be directly used. In other cases, we need to engineer our data to make new, more powerful features. For example, we can aggregate the ratings of all apps that a particular developer owns, so we can calculate a rating per developer and use it to validate future apps. We also employ several techniques to focus on interesting data. To create compact representations for sparse data, we use embedding. To help streamline the data to make it more useful to models, we use feature selection. Depending on the target, feature selection helps us keep the most relevant signals and remove irrelevant ones.
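
As a tiny illustration of the aggregation described above, the Kotlin sketch below computes a rating per developer from per-app records; the data class and field names are hypothetical:

data class AppRecord(val developerId: String, val rating: Double, val ratingCount: Int)

fun ratingPerDeveloper(apps: List<AppRecord>): Map<String, Double> =
    apps.groupBy { it.developerId }
        .mapValues { (_, records) ->
            // weight each app's rating by how many ratings it received
            val totalCount = records.sumOf { it.ratingCount }
            if (totalCount == 0) 0.0
            else records.sumOf { it.rating * it.ratingCount } / totalCount
        }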

By combining our different datasets and investing in feature engineering and feature selection, we improve the quality of the data that can be fed to various types of machine learning models.

Models

Building a good machine learning model is like building a skyscraper: quality materials are important, but a great design is also essential. Like the materials in a skyscraper, good datasets and features are important to machine learning, but a great algorithm is essential to identify PHA behaviors effectively and efficiently.

We train models to identify PHAs that belong to a specific category, such as SMS-fraud or phishing. Such categories are quite broad and contain a large number of samples given the number of PHA families that fit the definition. Alternatively, we also have models focusing on a much smaller scale, such as a family, which is composed of a group of apps that are part of the same PHA campaign and that share similar source code and behaviors. On the one hand, having a single model to tackle an entire PHA category may be attractive in terms of simplicity but precision may be an issue as the model will have to generalize the behaviors of a large number of PHAs believed to have something in common. On the other hand, developing multiple PHA models may require additional engineering efforts, but may result in better precision at the cost of reduced scope.

We use a variety of modeling techniques, including both supervised and unsupervised ones, to refine our machine learning approach.

One supervised technique we use is logistic regression, which has been widely adopted in the industry. These models have a simple structure and can be trained quickly. Logistic regression models can be analyzed to understand the importance of the different PHA and app features they are built with, allowing us to improve our feature engineering process. After a few cycles of training, evaluation, and improvement, we can launch the best models in production and monitor their performance.
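
For intuition, here is a minimal sketch of logistic-regression inference over an app's feature vector. The features, weights, and interpretation are invented for illustration; this is not Google Play Protect's actual model:

import kotlin.math.exp

fun sigmoid(z: Double) = 1.0 / (1.0 + exp(-z))

// returns a probability-like score that an app belongs to a PHA category
fun phaScore(features: DoubleArray, weights: DoubleArray, bias: Double): Double {
    require(features.size == weights.size)
    val z = features.indices.sumOf { features[it] * weights[it] } + bias
    return sigmoid(z)
}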

For more complex cases, we employ deep learning. Compared to logistic regression, deep learning is good at capturing complicated interactions between different features and extracting hidden patterns. The millions of apps in Google Play provide a rich dataset, which is advantageous to deep learning.

In addition to our targeted feature engineering efforts, we experiment with many aspects of deep neural networks. For example, a deep neural network can have multiple layers and each layer has several neurons to process signals. We can experiment with the number of layers and neurons per layer to change model behaviors.

We also adopt unsupervised machine learning methods. Many PHAs use similar abuse techniques and tricks, so they look almost identical to each other. An unsupervised approach helps define clusters of apps that look or behave similarly, which allows us to mitigate and identify PHAs more effectively. We can automate the process of categorizing that type of app if we are confident in the model or can request help from a human expert to validate what the model found.

PHAs are constantly evolving, so our models need constant updating and monitoring. In production, models are fed with data from recent apps, which help them stay relevant. However, new abuse techniques and behaviors need to be continuously detected and fed into our machine learning models to be able to catch new PHAs and stay on top of recent trends. This is a continuous cycle of model creation and updating that also requires tuning to ensure that the precision and coverage of the system as a whole matches our detection goals.

Looking forward

As part of Google's AI-first strategy, our work leverages many machine learning resources across the company, such as tools and infrastructures developed by Google Brain and Google Research. In 2017, our machine learning models successfully detected 60.3% of PHAs identified by Google Play Protect, covering over 2 billion Android devices. We continue to research and invest in machine learning to scale and simplify the detection of PHAs in the Android ecosystem.

Acknowledgements

This work was developed in joint collaboration with Google Play Protect, Safe Browsing and Play Abuse teams with contributions from Andrew Ahn, Hrishikesh Aradhye, Daniel Bali, Hongji Bao, Yajie Hu, Arthur Kaiser, Elena Kovakina, Salvador Mandujano, Melinda Miller, Rahul Mishra, Damien Octeau, Sebastian Porst, Chuangang Ren, Monirul Sharif, Sri Somanchi, Sai Deep Tetali, Zhikun Wang, and Mo Yu.

November 15th 2018, 2:26 pm

An Update on Project Treble

Android Developers Blog

Posted by Iliyan Malchev, Project Treble Architect

Last week at the 2018 Android Dev Summit, we demonstrated the benefits of Project Treble by showing the same Generic System Image (GSI) running on devices from different OEMs. We highlighted the availability of GSI for Android 9 Pie that app developers can use to develop and test their apps with Android 9 on any Treble-compliant device.

Launched with Android Oreo in 2017, Project Treble has enabled OEMs and silicon vendors to develop and deploy Android updates faster than what was previously possible. Since then, we've been working with device manufacturers to define Vendor Interfaces (VINTF) and draw a clear separation between vendor and framework code on Android devices.

Going forward, all devices launching with Android 9 Pie or later will be Treble-compliant and take full advantage of the Treble architecture to deliver faster upgrades. Thanks to Treble, we expect to see more devices from OEMs running Android 9 Pie at the end of 2018 as compared to the number of devices that were running Android Oreo at the end of 2017.

The GSI is built from the latest available AOSP source code, including the latest bug fixes contributed by OEMs. Device manufacturers already use GSI to validate the implementation of the vendor interface on their devices, and Android app developers can now harness the power of the GSI to test their apps across different devices. With GSI, you can test your apps on a pure AOSP version of the latest Android dessert, including the latest features and behavior changes, on any Treble-compliant device that's unlocked for flashing.

We're continuing to work on making GSI even more accessible and useful for app developers. For example, the GSI could enable early access to future Android platform builds that you can run on a Treble-compliant Android 9 device, so you could start app development and validation before the AOSP release.

If you are interested in trying GSI today, check out the documentation for full instructions on how to build GSI yourself and flash it to your Treble-compliant device.

November 14th 2018, 8:11 pm

Get your app ready for foldable phones

Android Developers Blog

Posted by Leo Sei, Product Manager on Android

As you may have heard from the Android Dev Summit, we announced that we're expanding support in Android to include Foldables, in preparation for upcoming devices from hardware partners like Samsung.

Here is a set of recommendations and information to make sure your application provides a great user experience on this new form factor (you can also check out the dedicated Android Dev Summit session here).

1. Screen continuity

On this new form factor, your application could be transitioned from one screen to another automatically (e.g., when folding or unfolding a foldable phone).

During this transition, your app will receive a configuration change for the new layout (and possibly density in some cases).

To provide a great user experience when changing from one screen to the other, you want to make sure your app properly supports runtime configuration changes.

How to test: Emulators for various devices should become available soon (e.g., Samsung will publish a folding/unfolding emulator APK later in Q4, which should work on Samsung Galaxy Tab S4 tablets as well as the AOSP emulator in Android Studio).
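
One common way to support the configuration change described above is to keep UI state in a ViewModel, which survives Activity re-creation. Here is a minimal Kotlin sketch using today's AndroidX lifecycle APIs; the class and field names are illustrative:

import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.lifecycle.ViewModel
import androidx.lifecycle.ViewModelProvider

class PlayerViewModel : ViewModel() {
    var playbackPositionMs: Long = 0L
}

class PlayerActivity : AppCompatActivity() {
    private lateinit var viewModel: PlayerViewModel

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // The same ViewModel instance is returned after folding or unfolding,
        // so playbackPositionMs survives the screen transition.
        viewModel = ViewModelProvider(this).get(PlayerViewModel::class.java)
    }
}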

2. Multi-resume

Today, when an app is in multi-window but not focused, it is in the onPause state.

While we provide recommendations on how to support multi-window, we noticed a significant number of apps are not handling the onPause state according to those recommendations (video playback paused or stopped, instant messages not displayed, etc.).

To help developers provide the best user experience on multi-window with minimal effort, we're allowing device manufacturers to keep all apps resumed when in multi-window in Android P.

To opt-in to this behavior in Android P, add the following meta-data in your app manifest:

<meta-data android:name="android.allow_multiple_resumed_activities" android:value="true" />

Note: With the next Android version we're looking into how to optimize compatibility for this behavior.

How to test: There are no devices with this behavior at the moment, but device manufacturers are working to update existing devices so that developers can test. Stay tuned for more details from device manufacturers.

3. Multi-display

Beginning with Android 8.0 (API level 26), the platform offers enhanced support for multiple displays. If an activity supports multi-window mode and is running on a device with multiple displays, users can move the activity from one display to another. When an app launches an activity, the app can specify which display the activity should run on. See here for the full documentation.

How to test: You can try it out by using the "Developer options > Simulate secondary displays" option. Keep in mind that these simulated displays do not process input.
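
A minimal Kotlin sketch of the multi-display launch described above, using ActivityOptions on API 26+; MapActivity is a hypothetical activity in your app:

import android.app.ActivityOptions
import android.content.Context
import android.content.Intent
import android.hardware.display.DisplayManager
import android.view.Display

fun launchOnSecondaryDisplay(context: Context) {
    val displayManager = context.getSystemService(DisplayManager::class.java)
    val secondary = displayManager.displays
        .firstOrNull { it.displayId != Display.DEFAULT_DISPLAY } ?: return  // no secondary display attached
    val options = ActivityOptions.makeBasic().setLaunchDisplayId(secondary.displayId)
    val intent = Intent(context, MapActivity::class.java)  // MapActivity is hypothetical
        .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
    context.startActivity(intent, options.toBundle())
}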

November 9th 2018, 1:59 pm

Unfolding right now at #AndroidDevSummit!

Android Developers Blog

Posted by Stephanie Cuthbertson, Director of Product Management

Today, at the Computer History Museum in Mountain View, CA, we kicked off the Android Dev Summit, taking a look back at the last 10 years of Android and then jumping into some important new features for Android developers. Here's a look at some of the things we shared!

Unfolding Android into new experiences

As early as Android 1.6, Android and our partners have contemplated different screen sizes and densities, enabling the platform to power a broad category of form factors and new experiences like Android TV, Android Auto, Wear OS and even Android apps on Chromebooks. Phone screens are an area where Android partners set the bar, introducing "phablets" when phone screens were small. Fast forward to today, when a phablet is... just a phone, a standard size users have come to love.

Now we see Android device makers creating a new category: Foldables. Taking advantage of new flexible display technology, the screen can literally bend and fold.

There are two variants broadly speaking: two-screen devices and one-screen devices. When folded, foldables look like phones, fitting in your pocket or purse. When unfolded, their defining feature is what we call screen continuity. For example, start a video with the folded smaller screen - and later you can sit down and unfold the device to get a larger tablet-sized screen for a beautiful, immersive experience. As you unfold, the app seamlessly transfers to the bigger screen without missing a beat. We're optimizing Android for this new form factor. And, making changes to help developers everywhere take advantage of the possibilities this creates for amazing new experiences, new ways to engage and delight your users. Tune in to the Foldables session at Dev Summit this week to learn more. Expect to see Foldables coming from several Android manufacturers, including one Samsung previewed today and plans to offer next year.

Kotlin: updates to the fastest growing language

We made Kotlin a first class language on Android in 2017. This month we had over 118,000 new projects using Kotlin started in Android Studio - from those users who opt in to share metrics. That's a 10X increase from last year. It's become the fastest growing language in terms of growth of number of contributors on GitHub, and voted the #2 most loved language on Stack Overflow. In our surveys, the more developers use Kotlin, the higher their satisfaction.

Last week, JetBrains released the latest version of Kotlin, 1.3, which brings new language features, APIs, bug fixes, and performance improvements:

All of these new features of Kotlin 1.3 will be integrated into the Kotlin-specific APIs that we provide–a majority of which are through KTX extensions as part of Jetpack.

Android Jetpack: Navigation, Work Manager, and Slices

At Google I/O we announced Jetpack, the next generation of tools and Android APIs to accelerate Android application development. Jetpack builds on the foundations laid out by the Support Library and Architecture Components. Already, 80% of the top 1,000 apps and games are using one of the new Jetpack libraries in production.

This summer we moved AndroidX - Jetpack's evolution of the original Android Support Library - to public AOSP. This means you can see features and bug fixes implemented in real-time, and contribute to any of the AndroidX libraries. You can learn more about contributing here.

We've been working to get as much feedback and refinement as possible on two new Architecture Component libraries: Navigation and Work Manager, and we plan to move both to Beta this month. The Navigation Architecture Component offers a simplified way to implement Android's navigation principles in your application, using a single Activity. Plus, the new Navigation Editor in Android Studio creates and edits your navigation architecture. This eliminates navigation boilerplate, gives you atomic navigation operations, easier animated transitions and more. WorkManager makes it easy to perform background tasks in the most efficient manner, choosing the most appropriate solution based on the application state and device API level.

Navigation Editor
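
As a small illustration of the Navigation component, a Fragment can navigate along an action defined in the navigation graph with a single call. This is a minimal sketch; R.id.action_home_to_details is a hypothetical action in your graph:

import androidx.fragment.app.Fragment
import androidx.navigation.fragment.findNavController

class HomeFragment : Fragment() {
    fun openDetails() {
        // R.id.action_home_to_details is a hypothetical action in this app's navigation graph
        findNavController().navigate(R.id.action_home_to_details)
    }
}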

We're also excited to see Android Slices move to public Search experiments! At I/O this year we introduced Slices, a new way to bring users to your app. Slices are like a mini snippet of your app, where you can surface content and actions. You can book a flight, play a video, or call a ride. Slices is another example where we want to be open very early, but we want to take the time to get it right. We're moving into public EAP this month with Doist, Kayak and others. We'll run experiments surfacing Slices in Google search results. To learn more, there's also a session today at Dev Summit with more info and best practices.

Android Studio: focusing on productivity, build speed, quality and fundamentals

Android Studio is our official IDE for Android development. We asked: where do you spend the most time? When we gather data from Android Studio's opted-in users, we see that build times are getting faster with every release, sometimes by as much as 20%, but we also see build times getting slower and slower over time. So, how can both things be true? We've been digging in hard to understand.

It turns out build is a pretty complicated ecosystem. Developer choices make a huge difference. Our developers are using a very broad (and growing) combination of OSes, custom plug-ins, annotation processors, and languages. All of these can significantly affect times. In one case, a plugin some users like to add was silently slowing build speeds by up to 45%. Learning this, we realized we need build profiling and analysis tools so you can easily understand what's slowing your build down. We're also investing more in our own plugins to accelerate performance to make sure we continue to improve the performance of core build.

Android Studio 3.3 launches beta 3 today. In coming releases expect to see a strong focus on quality and fundamentals: reducing the number of crashes and hangs, optimizing memory usage, and fixing user-impacting bugs. We also announced today that we're making Android Studio an officially supported IDE on Chrome OS early next year; learn more here.

Android App Bundles and dynamic features

App sizes have grown dramatically, up 5x since 2012. But larger apps have downsides: lower install conversion rates, lower update rates, and higher uninstalls. This is why we built the Android App Bundle, the new publishing format that serves only the code and resources a user needs to run your app on their specific device; on average apps see 35% size savings compared to a universal APK. The app bundle also saves you time and effort with each release since you don't need to use incomplete solutions like multi-APK. Android Studio 3.2 brought full IDE support of app bundles, and there are now thousands of app bundles in production totaling billions of installs, including Google's apps like YouTube, Google Maps, Google Photos, and Google News.

The app bundle now supports uncompressed native libraries; with no additional developer work needed, the app bundle now makes apps using native libraries an average of 8% smaller to download and 16% smaller on disk on M+ devices.

Once you switch to the app bundle you can also start modularizing your app. With dynamic feature modules, you can load any app functionality on demand instead of at install time. You don't need to keep big features that are only used once, on every single device forever; dynamic features can be installed and uninstalled dynamically when your app requests them.
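
For illustration, here is a minimal Kotlin sketch of requesting a dynamic feature module at runtime with the Play Core split install API; the module name "video_editor" is hypothetical and must match a dynamic feature module declared in your project:

import android.content.Context
import com.google.android.play.core.splitinstall.SplitInstallManagerFactory
import com.google.android.play.core.splitinstall.SplitInstallRequest

fun requestVideoEditorModule(context: Context) {
    val manager = SplitInstallManagerFactory.create(context)
    val request = SplitInstallRequest.newBuilder()
        .addModule("video_editor")  // hypothetical module name
        .build()
    manager.startInstall(request)
        .addOnSuccessListener { sessionId -> /* the install has started; monitor sessionId for progress */ }
        .addOnFailureListener { error -> /* handle errors such as no network */ }
}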

In-app Updates API

We've heard that you'd like more controls to ensure that users are running the latest and greatest version of your app. To address this, we're launching an In-app Updates API. We're testing the API with early access partners and will be launching it to all developers soon.

You'll have two options with this API; the first is a full-screen experience for critical updates when you expect the user to wait for the update to be applied immediately. The second option is a flexible update, which means the user can keep using the app while the update is downloaded. You can completely customize the update flow so it feels like part of your app.

Instant discovery

We're also making instant apps easier than ever to adopt. We recently made using web URLs optional, enabling you to take your existing play store deep link traffic and send users to your instant experience if it's available. Additionally, we've raised the instant app size limit to 10MB for the Try Now button on the Play Store and web banners to make it even easier to adopt.

In the Android Studio 3.3 beta, you can now build an instant-enabled app bundle. This means that you can now build and deploy both your instant and installed experiences from a single Android Studio project, and include them in a single Android App Bundle. You only have to upload one artifact for both the instant and installed app.

As developers, your feedback has been critical in shaping these investment areas; you are part of how we work, from early ideas, to EAPs and canaries, Beta, and iterating after launch. We hope you join us for the next two days whether you're watching the 30+ sessions on the livestream, joining social, or with us in-person in Mountain View. From the team, a sincere thank you for all your thoughtful feedback and contributions. We hope you enjoy Android Dev Summit.

November 7th 2018, 2:14 pm

R8, the new code shrinker from Google, is available in Android studio 3.3 beta

Android Developers Blog

Posted by Leo Sei, Product Manager on Android Studio and R8

Android developers know that APK size is an important factor in user engagement. Code shrinking helps reduce the size of your APK by getting rid of unused code and resources as well as making your actual code take less space (also known as minification or obfuscation).

That's why we're investing in improving code shrinking to make it faster and more efficient. We're excited to announce that the next generation code shrinker, R8, is available for preview as part of Android Studio 3.3 beta.

R8 does all of shrinking, desugaring, and dexing in one step. Compared to the current code shrinking solution, ProGuard, R8 shrinks code faster while improving the output size.

The following data comes from a benchmark of the Santa Tracker app; you can find the project with benchmark details in this GitHub repository.

How to try it

R8 is available with Android Studio 3.3 beta and works with Proguard rules. To try it, set the following in your project's gradle.properties file:

android.enableR8=true

For the more adventurous, R8 also has a full mode that is not directly compatible with ProGuard. To try it out, you can additionally set the following in your gradle.properties file:

android.enableR8.fullMode=true

This turns on more optimizations that can further reduce app size. However, you might need a few extra keep rules to make it work.
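
For example, if full mode removes a class that your code only reaches through reflection, you might add a keep rule like the following to your ProGuard rules file (the class name is hypothetical):

-keep class com.example.billing.ReflectiveCallback { *; }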

We have tested R8's correctness and performance on a number of apps, and the results are promising, so we plan to make R8 the default shrinker in Android Studio soon.

Please give R8 a try and we would love to hear your feedback. You can file a bug report using this link.

November 5th 2018, 2:08 pm

The Android Dev Summit app is live! Get ready for November 7-8

Android Developers Blog

Posted by Matt Pearring, Associate Product Marketing Manager, Developer Marketing

In just a week, we'll be kicking off Android Dev Summit 2018, broadcasting live from the Computer History Museum in Mountain View, CA on November 7 and 8. We'll have two days of deep technical sessions from the Android engineering team, with over 30 sessions livestreamed. The app just went live; download it on Google Play and start planning.

With the app you can explore the conference schedule with details on keynotes, sessions, and lightning talks. You can also plan your summit experience by saving events to your personalized schedule. This year's app is also an Instant app, so you can try it out first before installing it!

[Android Dev Summit app screenshots]

If you can't join in person, you can always join us online — we'll be livestreaming all of the sessions on the Android Dev Summit website or app and making them available on YouTube throughout the conference so you can watch at your own pace. Plus, we will share updates directly from the Computer History Museum to our social channels, so be sure to follow along!

October 31st 2018, 5:44 pm

Discontinuing support for Android Nearby Notifications

Android Developers Blog

Posted by Ritesh Nayak M, Product Manager

Three years ago, we created Nearby Notifications as a way for Android users to discover apps and content based on what is nearby. Our goal was to bring relevant and engaging content to users - to provide useful information proactively. Developers have leveraged this technology to let users know about free wifi nearby, provide guides while in a museum, and list transit schedules at bus stops.

We've learned a lot building and launching Nearby Notifications. However, earlier this year, we noticed a significant increase in locally irrelevant and spammy notifications that were leading to a poor user experience. While filtering and tuning can help, in the end, we have a very high bar for the quality of content that we deliver to users, especially content that is delivered through notifications. Ultimately, we have determined these notifications did not meet that bar. As a result, we have decided to discontinue support for Nearby Notifications. We will stop serving Nearby Notifications on December 6th, 2018.

What does it mean for Android users

Android users will stop receiving Nearby Notifications.

What does it mean for developers

On December 6th we will stop delivering both Eddystone and Physical Web beacon notifications. You will continue to have access to the beacon dashboard and can deliver proximity-based experiences similar to Nearby Notifications via your own apps using our Proximity Beacons API.

We have two related APIs, Nearby Messages and Connections, that are available for developers to build device-to-device connectivity experiences, and also have Fast Pair, for device discovery and pairing. We will continue to invest in these APIs and support products using these technologies.

We sincerely appreciate the efforts of the Android developer community in supporting and evolving Nearby technology and the feedback that has helped us improve. We look forward to continuing to deliver engaging proximity experiences to users and seeing what developers create within their apps with our APIs.

October 25th 2018, 12:37 pm

Free training for Android developers - learn how to succeed on Google Play

Android Developers Blog

Dan Lavelle, Head of Learning Operations, Google Play

Having a great idea for an app or game is just the beginning. At Google Play, it's our goal to provide you with the tools and skills to build successful mobile app and games businesses. Training continues to be among the top requested features from Android developers, and we've heard your feedback.

That's why we're launching a brand new, free e-learning platform to help you realize the full potential of your business on Google Play.

Introducing Google Play's Academy for App Success

Whether you're looking to grow your audience, understand performance metrics, or increase revenue, Play Academy is here to help you understand the best practices and Play Console features to succeed on Google Play. We built Play Academy to fit into your busy schedule. Learn from your home or office computer, or take courses on-the-go with your mobile device.

Key features of Play Academy

Learning paths

Choose from 10 collections of bite-sized courses organized around features and best practices, including: Test your app before release, Evaluate your app's technical performance, and Monetize your app.

Interactive lessons

Learn through a rich multimedia and interactive e-learning experience.

Assessments

Test your new knowledge of key Play Console features and mobile app best practices.

Achievements

Get recognition for your new skills. Wear your achievement badges with pride on your Play Academy profile.

Start learning today

It's easy to get started with free e-learning content from Google Play. Head over to g.co/play/academy to sign up and start your developer journey. Also, make sure you keep an eye on upcoming Play Academy news - we'll regularly update our courses to keep pace with the newest features and programs so that you can stay up-to-date with the latest insights you'll need to grow your app or game business.

How useful did you find this blog post?

October 24th 2018, 1:11 pm

Google Play offline peer to peer installs beta

Android Developers Blog

Posted by James Bender, Product Manager, Google Play

In June we started adding security metadata to all apps and app updates to help verify product authenticity from Google Play. We're doing this to help developers reach a wider audience, particularly in countries where peer-to-peer app sharing is common because of costly data plans and limited connectivity.

Now, when a user shares an app via Play-approved partner peer-to-peer apps, Play will be able to determine shared app authenticity while a device is offline, add those shared apps to a user's Play Library, and manage app updates when the device comes back online. This will give users more confidence when using Play-approved peer-to-peer app beta partners, starting today with SHAREIt. Additional integrations from Files Go by Google and Xender are planned in the coming weeks. Please visit the Play Store to make sure you have the latest versions of these apps.

This also benefits you as a developer as it provides a Play-authorized offline distribution channel and, since the peer-to-peer shared app is added to your user's Play library, your app will now be eligible for app updates from Play.

No action is needed by developers or your users. This is an important step that improves the integrity of Google Play's mobile app ecosystem. Offline Play peer-to-peer sharing presents a new distribution opportunity for developers while helping more people keep their apps up to date.

How useful did you find this blog post?

October 19th 2018, 4:04 pm

Android Protected Confirmation: Taking transaction security to the next level

Android Developers Blog

Posted by Janis Danisevskis, Information Security Engineer, Android Security

In Android Pie, we introduced Android Protected Confirmation, the first major mobile OS API that leverages a hardware protected user interface (Trusted UI) to perform critical transactions completely outside the main mobile operating system. This Trusted UI protects the choices you make from fraudulent apps or a compromised operating system. When an app invokes Protected Confirmation, control is passed to the Trusted UI, where transaction data is displayed and user confirmation of that data's correctness is obtained.

Once confirmed, your intention is cryptographically authenticated and unforgeable when conveyed to the relying party, for example, your bank. Protected Confirmation increases the bank's confidence that it acts on your behalf, providing a higher level of protection for the transaction.

Protected Confirmation also adds additional security relative to other forms of secondary authentication, such as a One Time Password or Transaction Authentication Number. These mechanisms can be frustrating for mobile users and also fail to protect against a compromised device that can corrupt transaction data or intercept one-time confirmation text messages.

Once the user approves a transaction, Protected Confirmation digitally signs the confirmation message. Because the signing key never leaves the Trusted UI's hardware sandbox, neither app malware nor a compromised operating system can fool the user into authorizing anything. Protected Confirmation signing keys are created using Android's standard AndroidKeyStore API. Before it can start using Android Protected Confirmation for end-to-end secure transactions, the app must enroll the public KeyStore key and its Keystore Attestation certificate with the remote relying party. The attestation certificate certifies that the key can only be used to sign Protected Confirmations.
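
A condensed Kotlin sketch of those two steps on API 28+, assuming a device that supports Protected Confirmation; the key alias, prompt text, and extra data are illustrative:

import android.content.Context
import android.security.ConfirmationCallback
import android.security.ConfirmationPrompt
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator
import java.util.concurrent.Executors

fun generateConfirmationKey() {
    val spec = KeyGenParameterSpec.Builder("payment_key", KeyProperties.PURPOSE_SIGN)
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setUserConfirmationRequired(true)  // the key will only sign user-confirmed messages
        .build()
    KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore").run {
        initialize(spec)
        generateKeyPair()  // enroll the public key and its attestation with the relying party
    }
}

fun confirmTransaction(context: Context) {
    val prompt = ConfirmationPrompt.Builder(context)
        .setPromptText("Send 50.00 USD to Alice?")
        .setExtraData(ByteArray(0))  // protocol-specific data supplied by the relying party
        .build()
    prompt.presentPrompt(Executors.newSingleThreadExecutor(), object : ConfirmationCallback() {
        override fun onConfirmed(dataThatWasConfirmed: ByteArray) {
            // sign dataThatWasConfirmed with the enrolled key and send it to the relying party
        }

        override fun onDismissed() {
            // the user declined the transaction
        }
    })
}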

There are many possible use cases for Android Protected Confirmation. At Google I/O 2018, the What's new in Android security session showcased partners planning to leverage Android Protected Confirmation in a variety of ways, including Royal Bank of Canada person to person money transfers; Duo Security, Nok Nok Labs, and ProxToMe for user authentication; and Insulet Corporation and Bigfoot Biomedical, for medical device control.

Insulet, a global leading manufacturer of tubeless patch insulin pumps, has demonstrated how they can modify their FDA-cleared Omnipod DASH™ Insulin Management System in a test environment to leverage Protected Confirmation to confirm the amount of insulin to be injected. This technology holds the promise of improved quality of life and reduced cost by enabling a person with diabetes to leverage their convenient, familiar, and secure smartphone for control rather than having to rely on a secondary, obtrusive, and expensive remote control device. (Note: The Omnipod DASH™ System is not cleared for use with the Pixel 3 mobile device or Protected Confirmation.)

This work is fulfilling an important need in the industry. Since smartphones do not fit the mold of an FDA approved medical device, we've been working with FDA as part of DTMoSt, an industry-wide consortium, to define a standard for phones to safely control medical devices, such as insulin pumps. A technology like Protected Confirmation plays an important role in gaining higher assurance of user intent and medical safety.

To integrate Protected Confirmation into your app, check out the Android Protected Confirmation training article. Android Protected Confirmation is an optional feature in Android Pie. Because it has low-level hardware dependencies, Protected Confirmation may not be supported by all devices running Android Pie. Google Pixel 3 and 3XL devices are the first to support Protected Confirmation, and we are working closely with other manufacturers to adopt this market-leading security innovation on more devices.

October 19th 2018, 12:18 pm

Playtime 2018: Helping you build better apps in a smaller bundle

Android Developers Blog

Posted by Matt Henderson, Product Manager, Google Play

Today we are kicking off Playtime, our annual global event series, hosting over 800 attendees in Berlin and San Francisco to share insights from experts around the world and the latest updates on our products. This will be followed by events in Sao Paulo, Singapore, Taipei, Seoul, and Tokyo.

At Google Play, we continue to invest in tools that make it easier for you to develop and distribute your apps to a global audience. Below are some of the exciting updates we are announcing today:

Building smaller apps

The Android App Bundle is Android's new publishing format, with which you can more easily deliver a great experience in a smaller app size. Smaller apps have higher conversion rates and our user research shows that app size is a leading motivator in driving uninstalls. With the Android App Bundle's modularization, you can also deliver features on demand, instead of at install time, further reducing the size of your app.

Thousands of app bundles are already in production, with an average size reduction of 35%. Today, we are announcing updates that offer additional reasons for you to switch to the bundle.

To learn more about the Android App Bundle, dynamic features, and all the benefits you receive from building a smaller, modular app, read our Medium post.

Building a unified instant experience

We've been listening to your feedback to make it easier to build instant apps, and we recently increased the size limit to 10MB to enable TRY NOW on the Play Store and removed the URL requirement. For game developers, we've partnered with Unity on a Google Play Instant plug-in and have built instant directly into the new Cocos Creator.

We’re now using the Android App Bundle to solve one of the primary pain points of building instant apps. Previously, you needed to publish both an instant app and an installable app. With Android Studio 3.2, you could publish instant-enabled bundles but you were still required to publish a primary app bundle.

Now, you don't have to maintain separate code. With the Android Studio 3.3 beta release, a developer can publish a single app bundle and mark it, or a particular module, as instant-enabled. The unified app bundle is the future of instant app experiences and we hope you will try it out.

Extending instant trials

Google Play Instant is now available for premium titles and pre-registration campaigns, so people can try your game before it launches and generate additional buzz. New apps and games join Google Play Instant every day, and we're excited to welcome Umiro, by Devolver Digital, and Looney Tunes World of Mayhem, by Scopely, as some of the first to take advantage of these new features.

Reducing crash rates and improving quality

The Play Console offers two tools to help you monitor performance and improve the quality of your apps. The pre-launch report runs your apps on real devices situated in the Firebase Test Lab and generates useful metadata to help you identify and fix issues before pushing your apps to production. Android vitals helps you track the performance and quality of your app on users' devices in the real world.

Now, we're linking them together to provide more actionable insights. Whenever a real-world crash in Android vitals is also seen during a pre-launch report execution, you'll get all the extra metadata from the pre-launch report available to you in the Android vitals dashboard so you can debug more effectively. This is also linked in both directions, so that if a crash occurs in pre-launch reports that is already happening in the real world, you'll be able to see the current impact in Android vitals which will help you better prioritize the issues highlighted by pre-launch reports.

Optimizing your app and business

We've made several updates to make it easier to manage your app and business with Play.

A new way to learn about Play

We're equally excited to launch the Academy for App Success with new interactive courses to help developers get the most out of the Play Console, understand Play policies, and utilize best practices to improve quality and increase business performance. This free new program allows you to track your learning progress with quizzes and achievements to demonstrate your expertise. Available in English today, new content and translated courses will be added soon.

We continue to be inspired by what you build and the impact you have on people around the world. Check out our #IMakeApps collection, which celebrates some amazing people who create apps and games, and share your #IMakeApps story.

How useful did you find this blog post?

October 18th 2018, 1:20 pm

Building a Titan: Better security through a tiny chip

Android Developers Blog

Posted by Nagendra Modadugu and Bill Richardson, Google Device Security Group

At the Made by Google event last week, we talked about the combination of AI + Software + Hardware to help organize your information. To better protect that information at a hardware level, our new Pixel 3 and Pixel 3 XL devices include a Titan M chip. We briefly introduced Titan M and some of its benefits on our Keyword Blog, and with this post we dive into some of its technical details.

Titan M is a second-generation, low-power security module designed and manufactured by Google, and is a part of the Titan family. As described in the Keyword Blog post, Titan M performs several security sensitive functions, including:

Including Titan M in Pixel 3 devices substantially reduces the attack surface. Because Titan M is a separate chip, the physical isolation mitigates against entire classes of hardware-level exploits such as Rowhammer, Spectre, and Meltdown. Titan M's processor, caches, memory, and persistent storage are not shared with the rest of the phone's system, so side channel attacks like these—which rely on subtle, unplanned interactions between internal circuits of a single component—are nearly impossible. In addition to its physical isolation, the Titan M chip contains many defenses to protect against external attacks.

But Titan M is not just a hardened security microcontroller; it represents a full-lifecycle approach to security designed with Pixel devices in mind. Titan M's security takes into consideration all the features visible to Android, down to the lowest-level physical and electrical circuit design, and extends beyond each physical device to our supply chain and manufacturing processes. At the physical level, we incorporated essential features optimized for the mobile experience: low power usage, low latency, hardware crypto acceleration, tamper detection, and secure, timely firmware updates. We improved and invested in the supply chain for Titan M by creating a custom provisioning process, which provides us with transparency and control starting from the earliest silicon stages.

Finally, in the interest of transparency, the Titan M firmware source code will be publicly available soon. While Google holds the root keys necessary to sign Titan M firmware, it will be possible to reproduce binary builds based on the public source for the purpose of binary transparency.

A closer look at Titan M

Titan (left) and Titan M (right)

Titan M's CPU is an ARM Cortex-M3 microprocessor specially hardened against side-channel attacks and augmented with defensive features to detect and respond to abnormal conditions. The Titan M CPU core also exposes several control registers, which can be used to taper access to chip configuration settings and peripherals. Once powered on, Titan M verifies the signature of its flash-based firmware using a public key built into the chip's silicon. If the signature is valid, the flash is locked so it can't be modified, and then the firmware begins executing.

Titan M also features several hardware accelerators: AES, SHA, and a programmable big number coprocessor for public key algorithms. These accelerators are flexible and can either be initialized with keys provided by firmware or with chip-specific and hardware-bound keys generated by the Key Manager module. Chip-specific keys are generated internally based on entropy derived from the True Random Number Generator (TRNG), and thus such keys are never externally available outside the chip over its entire lifetime.

While implementing Titan M firmware, we had to take many system constraints into consideration. For example, packing as many security features as possible into Titan M's 64 Kbytes of RAM required all firmware to execute exclusively off the stack. And to reduce flash wear, RAM contents can be preserved even during low-power mode when most hardware modules are turned off.

The diagram below provides a high-level view of the chip components described here.

Better security through transparency and innovation

At the heart of our implementation of Titan M are two broader trends: transparency and building a platform for future innovation.

Transparency around every step of the design process — from logic gates to boot code to the applications — gives us confidence in the defenses we're providing for our users. We know what's inside, how it got there, how it works, and who can make changes.

Custom hardware allows us to provide new features, capabilities, and performance not readily available in off-the-shelf components. These changes allow higher assurance use cases like two-factor authentication, medical device control, P2P payments, and others that we will help develop down the road.

As more of our lives are bound up in our phones, keeping those phones secure and trustworthy is increasingly important. Google takes that responsibility seriously. Titan M is just the latest step in our continuing efforts to improve the privacy and security of all our users.

October 17th 2018, 2:05 pm

Modern background execution in Android

Android Developers Blog

Posted by Luiz Gustavo Martins, Partner Developer Advocate, Partner DevRel

This is the third in a series of blog posts in which we outline strategies and guidance for Android with regard to power.

Over the years, executing background tasks on Android has evolved. To write modern apps, it's important to learn how to run your background tasks in a modern fashion.

When is an app in the background?

Before understanding what background execution is, we need to have a clear view of when Android understands an app to be in the foreground. An app is considered to be in the foreground if any of the following is true:

If none of those conditions is true, the app is considered to be in the background.

Background execution changes

Running tasks in the background consumes a device's limited resources, like RAM and battery. This might result in a bad user experience. For example, background tasks may degrade the battery life of the device, or the user may experience poor device performance at times such as when watching a video, playing a game, or using the camera.

To improve battery life and give a better user experience, Android has evolved over several releases to establish limits on background execution. These limits include:

Use cases and solutions

Deciding which tools to use to implement background execution requires the developer to have a clear understanding of what they want to accomplish, and under which restrictions. This flowchart can help you make a decision:

In Summary:

Use case: Guaranteed execution of deferrable work
  • Upload logs to your server
  • Encrypt/decrypt content to upload/download
Solution: WorkManager

Use case: A task initiated in response to an external event
  • Syncing new online content like email
Solution: FCM + WorkManager

Use case: Continue user-initiated work that needs to run immediately, even if the user leaves the app
  • Music player
  • Tracking activity
  • Transit navigation
Solution: Foreground Service

Use case: Trigger actions that involve user interactions, like notifications at an exact time
  • Alarm clock
  • Medicine reminder
  • Notification about a TV show that is about to start
Solution: AlarmManager
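
To make the last use case above concrete, here is a minimal sketch of scheduling an exact, user-facing reminder with AlarmManager. The receiver class, request code, and reminder wording are hypothetical, and setExactAndAllowWhileIdle requires API 23 or higher:

import android.app.AlarmManager
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

// Hypothetical receiver that posts the reminder notification when the alarm fires.
class ReminderReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        // Post the user-facing reminder notification here.
    }
}

// Schedules a user-facing reminder at an exact time; the alarm can fire even in Doze.
fun scheduleMedicineReminder(context: Context, triggerAtMillis: Long) {
    val alarmManager = context.getSystemService(Context.ALARM_SERVICE) as AlarmManager
    val pendingIntent = PendingIntent.getBroadcast(
        context, 0, Intent(context, ReminderReceiver::class.java), PendingIntent.FLAG_UPDATE_CURRENT
    )
    alarmManager.setExactAndAllowWhileIdle(AlarmManager.RTC_WAKEUP, triggerAtMillis, pendingIntent)
}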

Use background execution judiciously so that you can build cool apps that delight users while saving their battery. If you need more information on executing background tasks on Android, there's great content at the Android developer site.

Note: WorkManager is still in public preview. If you need an alternative solution right now, you should use JobScheduler, although it has limitations that don't apply to WorkManager. JobScheduler is part of the Android Framework, and only available for Android API 21 and above; WorkManager works on API 14 and above.
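
For the deferrable-work use case, a rough sketch of scheduling a log upload with WorkManager is shown below. The worker class and the upload helper are hypothetical, and the exact builder API may differ slightly between the preview and later stable releases:

import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequest
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters

// Hypothetical worker that uploads logs; WorkManager re-runs it later if it asks to retry.
class UploadLogsWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        return try {
            uploadLogsToServer()   // placeholder for your own networking code
            Result.success()
        } catch (t: Throwable) {
            Result.retry()
        }
    }

    private fun uploadLogsToServer() {
        // Placeholder: replace with your own upload logic.
    }
}

fun scheduleLogUpload(context: Context) {
    // Only run when a network connection is available.
    val constraints = Constraints.Builder()
        .setRequiredNetworkType(NetworkType.CONNECTED)
        .build()

    val request = OneTimeWorkRequest.Builder(UploadLogsWorker::class.java)
        .setConstraints(constraints)
        .build()

    // WorkManager guarantees the work runs even if the user leaves the app.
    WorkManager.getInstance().enqueue(request)
}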

Acknowledgements: This series of blog posts is produced in collaboration between the Android Framework and DevRel teams.

October 16th 2018, 1:03 pm

Get ready for #AndroidDevSummit, kicking off November 7!

Android Developers Blog

In less than a month, we'll be kicking off Android Dev Summit 2018, broadcasting live from the Computer History Museum in Mountain View, CA on November 7 and 8. We'll have two days of deep technical sessions from the Android engineering team, with over 30 sessions livestreamed. The first wave of sessions has just been posted to the website: check them out and start planning.

The summit kicks off on November 7 at 10AM PST with the keynote, where you'll hear directly from Dave Burke and others on the present and future of Android development. From there, we'll dive into two tracks (and two days!) of deep technical content from the Google engineering team, on topics such as Android Pie, Android Studio, Kotlin, Android Jetpack, Google Play and more. We'll also have demos and office hours for those attending in person; more on that in the coming weeks!

We received a ton of interest from developers looking to attend in person; if you were one of those who expressed interest but didn't receive a ticket, we've already reached out to you and shared this news, but we want to apologize again that we weren't able to find you a spot. Rest assured, though, that we're still doing all that we can to free up more tickets, and we'll be reaching out to folks we're able to accommodate in the lead-up to the show. And if you did receive a ticket but your plans have changed and you're no longer able to attend, please let us know by sending an email to android-dev-summit@google.com, and we'll free up your spot for others on the waitlist.

If you can't join in person, you can always join us online: we'll be livestreaming all of the sessions on the Android Dev Summit website and making them available on YouTube throughout the conference to watch at your own pace. Plus, we'll be sharing updates directly from the Computer History Museum to our social channels, so be sure to follow along!

October 12th 2018, 4:17 pm

Introducing Oboe: A C++ library for low latency audio

Android Developers Blog

Posted by Don Turner, Developer Advocate, Android Audio Framework

This week we released the first production-ready version of Oboe - a C++ library for building real-time audio apps. Oboe provides the lowest possible audio latency across the widest range of Android devices, as well as several other benefits.

Single API

Oboe takes advantage of the improved performance and features of AAudio on Oreo MR1 (API 27+) whilst maintaining backward compatibility (using OpenSL ES) on API 16+. It's kind of like AndroidX for native audio.

Less code to write and maintain

Using Oboe you can create an audio stream in just 3 lines of code (vs 50+ lines in OpenSL ES):

AudioStreamBuilder builder;
AudioStream *stream = nullptr;
Result result = builder.openStream(&stream);

Other benefits

Getting started

Take a look at the short video introduction:

Check out the documentation, code samples and API reference. There's even a codelab which shows you how to build a rhythm-based game.

If you have any issues, please file them here; we'd love to hear how you get on.

October 11th 2018, 5:53 pm

Control Flow Integrity in the Android kernel

Android Developers Blog

Posted by Sami Tolvanen, Staff Software Engineer, Android Security

Android's security model is enforced by the Linux kernel, which makes it a tempting target for attackers. We have put a lot of effort into hardening the kernel in previous Android releases and in Android 9, we continued this work by focusing on compiler-based security mitigations against code reuse attacks.

Google's Pixel 3 will be the first Android device to ship with LLVM's forward-edge Control Flow Integrity (CFI) enforcement in the kernel, and we have made CFI support available in Android kernel versions 4.9 and 4.14. This post describes how kernel CFI works and provides solutions to the most common issues developers might run into when enabling the feature.

Protecting against code reuse attacks

A common method of exploiting the kernel is using a bug to overwrite a function pointer stored in memory, such as a stored callback pointer or a return address that had been pushed to the stack. This allows an attacker to execute arbitrary parts of the kernel code to complete their exploit, even if they cannot inject executable code of their own. This method of gaining code execution is particularly popular with the kernel because of the huge number of function pointers it uses, and the existing memory protections that make code injection more challenging.

CFI attempts to mitigate these attacks by adding additional checks to confirm that the kernel's control flow stays within a precomputed graph. This doesn't prevent an attacker from changing a function pointer if a bug provides write access to one, but it significantly restricts the valid call targets, which makes exploiting such a bug more difficult in practice.

Figure 1. In an Android device kernel, LLVM's CFI limits 55% of indirect calls to at most 5 possible targets and 80% to at most 20 targets.

Gaining full program visibility with Link Time Optimization (LTO)

In order to determine all valid call targets for each indirect branch, the compiler needs to see all of the kernel code at once. Traditionally, compilers work on a single compilation unit (source file) at a time and leave merging the object files to the linker. LLVM's solution to CFI is to require the use of LTO, where the compiler produces LLVM-specific bitcode for all C compilation units, and an LTO-aware linker uses the LLVM back-end to combine the bitcode and compile it into native code.

Figure 2. A simplified overview of how LTO works in the kernel. All LLVM bitcode is combined, optimized, and generated into native code at link time.

Linux has used the GNU toolchain for assembling, compiling, and linking the kernel for decades. While we continue to use the GNU assembler for stand-alone assembly code, LTO requires us to switch to LLVM's integrated assembler for inline assembly, and either GNU gold or LLVM's own lld as the linker. Switching to a relatively untested toolchain on a huge software project will lead to compatibility issues, which we have addressed in our arm64 LTO patch sets for kernel versions 4.9 and 4.14.

In addition to making CFI possible, LTO also produces faster code due to global optimizations. However, additional optimizations often result in a larger binary size, which may be undesirable on devices with very limited resources. Disabling LTO-specific optimizations, such as global inlining and loop unrolling, can reduce binary size by sacrificing some of the performance gains. When using GNU gold, the aforementioned optimizations can be disabled with the following additions to LDFLAGS:

LDFLAGS += -plugin-opt=-inline-threshold=0 \
           -plugin-opt=-unroll-threshold=0

Note that flags to disable individual optimizations are not part of the stable LLVM interface and may change in future compiler versions.

Implementing CFI in the Linux kernel

LLVM's CFI implementation adds a check before each indirect branch to confirm that the target address points to a valid function with a correct signature. This prevents an indirect branch from jumping to an arbitrary code location and even limits the functions that can be called. As C compilers do not enforce similar restrictions on indirect branches, there were several CFI violations due to function type declaration mismatches even in the core kernel that we have addressed in our CFI patch sets for kernels 4.9 and 4.14.

Kernel modules add another complication to CFI, as they are loaded at runtime and can be compiled independently from the rest of the kernel. In order to support loadable modules, we have implemented LLVM's cross-DSO CFI support in the kernel, including a CFI shadow that speeds up cross-module look-ups. When compiled with cross-DSO support, each kernel module contains information about valid local branch targets, and the kernel looks up information from the correct module based on the target address and the modules' memory layout.

Figure 3. An example of a cross-DSO CFI check injected into an arm64 kernel. Type information is passed in X0 and the target address to validate in X1.

CFI checks naturally add some overhead to indirect branches, but due to more aggressive optimizations, our tests show that the impact is minimal, and overall system performance even improved 1-2% in many cases.

Enabling kernel CFI for an Android device

CFI for arm64 requires clang version >= 5.0 and binutils >= 2.27. The kernel build system also assumes that the LLVMgold.so plug-in is available in LD_LIBRARY_PATH. Pre-built toolchain binaries for clang and binutils are available in AOSP, but upstream binaries can also be used.

The following kernel configuration options are needed to enable kernel CFI:

CONFIG_LTO_CLANG=y
CONFIG_CFI_CLANG=y

Using CONFIG_CFI_PERMISSIVE=y may also prove helpful when debugging a CFI violation or during device bring-up. This option turns a violation into a warning instead of a kernel panic.

As mentioned in the previous section, the most common issues we ran into when enabling CFI on Pixel 3 were benign violations caused by function pointer type mismatches. When the kernel runs into such a violation, it prints out a runtime warning that contains the call stack at the time of the failure, and the call target that failed the CFI check. Changing the code to use a correct function pointer type fixes the issue. While we have fixed all known indirect branch type mismatches in the Android kernel, similar problems may still be found in device-specific drivers, for example.

CFI failure (target: [<fffffff3e83d4d80>] my_target_function+0x0/0xd80):
------------[ cut here ]------------
kernel BUG at kernel/cfi.c:32!
Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
…
Call trace:
…
[<ffffff8752d00084>] handle_cfi_failure+0x20/0x28
[<ffffff8752d00268>] my_buggy_function+0x0/0x10
…

Figure 4. An example of a kernel panic caused by a CFI failure.

Another potential pitfall is address space conflicts, but these should be less common in driver code. LLVM's CFI checks only understand kernel virtual addresses, and any code that runs at another exception level or makes an indirect call to a physical address will result in a CFI violation. These types of failures can be addressed by disabling CFI for a single function using the __nocfi attribute, or even disabling CFI for entire code files using the $(DISABLE_CFI) compiler flag in the Makefile.

static int __nocfi address_space_conflict()
{
        void (*fn)(void);
        …
        /* branching to a physical address trips CFI w/o __nocfi */
        fn = (void *)__pa_symbol(function_name);
        cpu_install_idmap();
        fn();
        cpu_uninstall_idmap();
        …
}

Figure 5. An example of fixing a CFI failure caused by an address space conflict.

Finally, like many hardening features, CFI can also be tripped by memory corruption errors that might otherwise result in random kernel crashes at a later time. These may be more difficult to debug, but memory debugging tools such as KASAN can help here.

Conclusion

We have implemented support for LLVM's CFI in Android kernels 4.9 and 4.14. Google's Pixel 3 will be the first Android device to ship with these protections, and we have made the feature available to all device vendors through the Android common kernel. If you are shipping a new arm64 device running Android 9, we strongly recommend enabling kernel CFI to help protect against kernel vulnerabilities.

LLVM's CFI protects indirect branches against attackers who manage to gain access to a function pointer stored in kernel memory. This makes a common method of exploiting the kernel more difficult. Our future work involves also protecting function return addresses from similar attacks using LLVM's Shadow Call Stack, which will be available in an upcoming compiler release.

October 10th 2018, 12:12 pm

Providing a safe and secure experience for our users

Android Developers Blog

Posted by Paul Bankhead, Director, Product Management, Google Play

We focus relentlessly on security and privacy on the Google Play Store to ensure Android users have a positive experience discovering and installing apps and games they love. We regularly update our Google Play Developer policies and today have introduced stronger controls and new policies to keep user data safe. Here are a few updates:

Upgrading for security and performance

As previously announced, as of November 1, 2018, Google Play will require updates to existing apps to target API level 26 (Android 8.0) or higher (this is already required for all new apps). Our goal is to ensure all apps on Google Play are built using the latest APIs that are optimized for security and performance.

Protecting Users

Our Google Play Developer policies are designed to provide a safe and secure experience for our users while also giving developers the tools they need to succeed. For example, we have always required developers to limit permission requests to only what is needed for their app to function and to be clear with users about what data they access.

As part of today's Google Play Developer Policy update, we're announcing changes related to SMS and Call Log permissions. Some Android apps ask for permission to access a user's phone (including call logs) and SMS data. Going forward, Google Play will limit which apps are allowed to ask for these permissions. Only an app that has been selected as a user's default app for making calls or text messages will be able to make these requests.

Please visit our Google Play Developer Policy Center and this Help Center article for detailed information on product alternatives to SMS and call logs permissions. For example, the SMS Retriever API enables you to perform SMS-based user verification and SMS Intent enables you to initiate an SMS or MMS text message to share content or invitations. We'll be working with our developer partners to give them appropriate time to adjust and update their apps, and will begin enforcement 90 days from this policy update.
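
To illustrate one of those alternatives, here is a minimal sketch of handing an outgoing message off to the user's default SMS app with an ACTION_SENDTO intent, so your app never needs an SMS permission. The function name, recipient, and body are placeholders:

import android.content.Context
import android.content.Intent
import android.net.Uri

// Launches the user's default SMS app with a pre-filled message; no SMS permission required.
fun shareViaSms(context: Context, phoneNumber: String, body: String) {
    val intent = Intent(Intent.ACTION_SENDTO).apply {
        data = Uri.parse("smsto:$phoneNumber")   // only SMS apps resolve smsto: URIs
        putExtra("sms_body", body)
    }
    if (intent.resolveActivity(context.packageManager) != null) {
        context.startActivity(intent)
    }
}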

In the coming months, we'll be rolling out additional controls and policies across our various products and platforms, and will continue to work with you, our developers, to help with the transition.

The trust of our users is critical and together we'll continue to build a safe and secure Android ecosystem.

October 8th 2018, 1:38 pm

Kotlin Momentum for Android and Beyond

Android Developers Blog

Posted by James Lau (@jmslau), Product Manager

Today marks the beginning of KotlinConf 2018 - the largest annual in-person gathering of the Kotlin community. 2018 has been a big year for Kotlin, as the language continues to gain adoption and earn the love of developers. In fact, 27% of the top 1000 Android apps on Google Play already use Kotlin. More importantly, Android developers are loving the language, with over 97% satisfaction in our most recent survey. It's no surprise that Kotlin was voted the #2 most-loved language in the 2018 StackOverflow survey.

Google supports Kotlin as a first-class programming language for Android development. In the past 12 months, we have delivered a number of important improvements to the Kotlin developer experience. This includes the Kotlin-friendly SDK, Android KTX, new Lint checks and various Kotlin support improvements in Android Studio. We have also launched Kotlin support in our official documentation, new flagship samples in Kotlin, a new Kotlin Bootcamp Udacity course, #31DaysOfKotlin and other deep dive content. We are committed to continuing to improve the Kotlin developer experience.

As the language continues to advance, more developers are discovering the benefits of Kotlin across the globe. Recently, we traveled to India and worked with local developers like Zomato to better understand how adopting Kotlin has benefited their Android development. Zomato is a leading restaurant search & discovery service that operates in 24 countries, with over 150 million monthly users. Kotlin helped Zomato reduce the number of lines of code in their app significantly, and it has also helped them find important defects in their app at compile time. You can watch their Kotlin adoption story in the video below.

Android Developer Story: Zomato uses Kotlin to write safer, more concise code.

Going beyond Android, we are happy to announce that the Google Cloud Platform team is launching a dedicated Kotlin portal today. This will help developers more easily find resources related to Kotlin on Google Cloud. We want to make it as easy as possible for you to use Kotlin, whether it's on mobile or in the Cloud.

Google Cloud Platform's Kotlin Homepage

Adopting a new language is a major decision for most companies, and you need to be confident that the language you choose will have a bright future. That's why Google has joined forces with JetBrains and established the Kotlin Foundation. The Foundation will ensure that Kotlin continues to advance rapidly, remain free and stay open. You can learn more about the Kotlin Foundation here.

It's an exciting time to be a Kotlin developer. If you haven't tried Kotlin yet, we encourage you to join this growing global community. You can get started by visiting kotlinlang.org or the Android Developer Kotlin page.

October 4th 2018, 1:33 pm

Start a new .page today

Android Developers Blog

Posted by Ben Fried, VP, CIO, & Chief Domains Enthusiast

Today we're announcing .page, the newest top-level domain (TLD) from Google Registry.

A TLD is the last part of a domain name, like .com in "google.com" or .google in "blog.google". The .page TLD is a new opportunity for anyone to build an online presence. Whether you're writing a blog, getting your business online, or promoting your latest project, .page makes it simple and more secure to get the word out about the unique things you do.

Check out 10 interesting things some people and businesses are already doing on .page:

  1. Ellen.Page is the website of Academy Award®-nominated actress and producer Ellen Page that will spotlight LGBTQ culture and social causes.
  2. Home.Page is a project by the digital media artist Aaron Koblin, who is creating a living collection of hand-drawn houses from people across the world. Enjoy free art daily and help bring real people home by supporting revolving bail.
  3. ChristophNiemann.Page is the virtual exhibition space of illustrator, graphic designer, and author Christoph Niemann.
  4. Web.Page is a collaboration between a group of designers and developers who will offer a monthly online magazine with design techniques, strategies, and inspiration.
  5. CareersXO.Page by Geek Girl Careers is a personality assessment designed to help women find tech careers they love.
  6. TurnThe.Page by Insurance Lounge offers advice about the transition from career to retirement.
  7. WordAsImage.Page is a project by designer Ji Lee that explores the visualizations of words through typography.
  8. Membrane.Page by Synder Filtration is an educational website about spiral-wound nanofiltration, ultrafiltration, and microfiltration membrane elements and systems.
  9. TV.Page is a SaaS company that provides shoppable video technology for e-commerce, social media, and retail stores.
  10. Navlekha.Page was created by Navlekhā, a Google initiative that helps Indian publishers get their content online with free authoring tools, guidance, and a .page domain for the first 3 years. Since the initiative debuted at Google for India, publishers are creating articles within minutes. And Navlekhā plans to bring 135,000 publishers online over the next 5 years.

Security is a top priority for Google Registry's domains. To help keep your information safe, all .page websites require an SSL certificate, which helps keep connections to your domain secure and helps protect against things like ad malware and tracking injections. Both .page and .app, which we launched in May, will help move the web to an HTTPS-everywhere future.

.page domains are available now through the Early Access Program. For an extra fee, you'll have the chance to get the perfect .page domain name from participating registrar partners before standard registrations become available on October 9th. For more details about registering your domain, check out get.page. We're looking forward to seeing what you'll build on .page!

October 2nd 2018, 4:30 pm

Android Studio 3.2

Android Developers Blog

Posted by Jamal Eason, Product Manager, Android

Today, Android Studio 3.2 is available for download. Android Studio 3.2 is the best way for app developers to cut into the latest Android 9 Pie release and build the new Android App bundle. Since announcing this update of Android Studio at Google I/O '18, we have refined and polished 20+ new features and focused our efforts on improving the quality for this stable release of Android Studio 3.2.

Every developer should use Android Studio 3.2 to transition to the Android App Bundle, the new app publishing format. With very minimal work, you can generate an app bundle with Android Studio. Once you upload your app bundle to Google Play, you can distribute smaller, optimized apps to your users. Early adopters have already seen app size savings of 11% to 64% with app bundles compared to the legacy APK size.

Another feature you do not want to miss is the Energy Profiler. This new profiler gives you a set of tools that will help you diagnose and improve the energy impact of your app. Better device battery life is one of the top most user requests, and with the Energy Profiler in Android Studio 3.2, you can do your part in improving device battery life by making sure your app is using the right amount of energy at the right time.

Lastly, you should also check out the new Android Emulator Snapshots feature. By using this feature, you can quickly take a snapshot of the current state of your emulator, which includes the current state of the screen, apps, and settings. You can resume or boot into your emulator snapshot in under 2 seconds. For any app developer looking for super-fast boot times, or seeking to run tests in a predictable Android environment, Android Emulator Snapshots is a game-changing feature for app development.

On top of these major features, there are 20 new features plus many under-the-hood quality refinements in Android Studio 3.2. By using Android Studio 3.2, you can also develop for the latest technologies ranging from Android Jetpack, to the latest in Google Artificial Intelligence (AI) APIs with Android Slices.

Thank you to those who gave your early feedback on both the canary and beta releases. Your feedback helped us improve the quality and features in Android Studio 3.2. If you are ready for the next stable release, and want to use a new set of productivity features, Android Studio 3.2 is ready to download for you to get started.

Below is a full list of new features in Android Studio 3.2, organized by key developer flows.

Develop

Slices Provider Template

Build

Build Android App Bundle

Test

Android Emulator Snapshots

Optimize

Energy Profiler

To recap, Android Studio 3.2 includes these new major features:

Develop

  • AndroidX Refactoring
  • Sample Data
  • Material Design Update
  • Android Slices
  • CMakeList editing
  • What's New Assistant
  • New Lint Checks
  • IntelliJ Platform Update
  • Kotlin Update

Build

  • Android App Bundle
  • D8 Desugaring
  • R8 Optimizer

Test

  • Android Emulator Snapshots
  • Screen Record in Android Emulator
  • Virtual Scene Android Emulator Camera
  • AMD Processor Support
  • Hyper-V Support
  • ADB Connection Assistant

Optimize

  • Energy Profiler
  • System Trace
  • Profiler Sessions
  • Automatic CPU Recording
  • JNI Reference Tracking

Check out the release notes for more details.

Getting Started

Download the latest version of Android Studio 3.2 from the download page. If you are using a previous canary release of Android Studio, make sure you update to Android Studio Canary 14 or higher. If you want to maintain a stable version of Android Studio, you can run the stable release version and canary release versions of Android Studio at the same time. Learn more.

To use the mentioned Android Emulator features make sure you are running at least Android Emulator v28.0.7+ downloaded via the Android Studio SDK Manager.

We appreciate any feedback on things you like, and issues or features you would like to see. Please note, to maintain high product quality, a couple of features (e.g. Navigation Editor) you saw in earlier release channels are not enabled by default in the stable release channel. If you find a bug or issue, feel free to file an issue. Connect with us -- the Android Studio development team -- on our Google+ page or on Twitter.

September 24th 2018, 3:35 pm

Android and Google Play Security Rewards Programs surpass $3M in payouts

Android Developers Blog

Posted by Jason Woloz and Mayank Jain, Android Security & Privacy Team

Our Android and Play security reward programs help us work with top researchers from around the world to improve Android ecosystem security every day. Thank you to all the amazing researchers who submitted vulnerability reports.

Android Security Rewards

In the ASR program's third year, we received over 470 qualifying vulnerability reports from researchers and the average pay per researcher jumped by 23%. To date, the ASR program has rewarded researchers with over $3M, paying out roughly $1M per year.

Here are some of the highlights from the Android Security Rewards program's third year:

As part of our ongoing commitment to security, we regularly update our programs and policies based on ecosystem feedback. We also updated our severity guidelines for evaluating the impact of reported security vulnerabilities against the Android platform.

Google Play Security Rewards

In October 2017, we rolled out the Google Play Security Reward Program to encourage security research into popular Android apps available on Google Play. So far, researchers have reported over 30 vulnerabilities through the program, earning a combined bounty amount of over $100K.

If undetected, these vulnerabilities could have potentially led to elevation of privilege, access to sensitive data and remote code execution on devices.

Keeping devices secure

In addition to rewarding for vulnerabilities, we continue to work with the broad and diverse Android ecosystem to protect users from issues reported through our program. We collaborate with manufacturers to ensure that these issues are fixed on their devices through monthly security updates. Over 250 device models have a majority of their deployed devices running a security update from the last 90 days. This table shows the models with a majority of deployed devices running a security update from the last three months:

Manufacturer Device
ANS L50
Asus ZenFone 5Z (ZS620KL/ZS621KL), ZenFone Max Plus M1 (ZB570TL), ZenFone 4 Pro (ZS551KL), ZenFone 5 (ZE620KL), ZenFone Max M1 (ZB555KL), ZenFone 4 (ZE554KL), ZenFone 4 Selfie Pro (ZD552KL), ZenFone 3 (ZE552KL), ZenFone 3 Zoom (ZE553KL), ZenFone 3 (ZE520KL), ZenFone 3 Deluxe (ZS570KL), ZenFone 4 Selfie (ZD553KL), ZenFone Live L1 (ZA550KL), ZenFone 5 Lite (ZC600KL), ZenFone 3s Max (ZC521TL)
BlackBerry BlackBerry MOTION, BlackBerry KEY2
Blu Grand XL LTE, Vivo ONE, R2_3G, Grand_M2, BLU STUDIO J8 LTE
bq Aquaris V Plus, Aquaris V, Aquaris U2 Lite, Aquaris U2, Aquaris X, Aquaris X2, Aquaris X Pro, Aquaris U Plus, Aquaris X5 Plus, Aquaris U lite, Aquaris U
Docomo F-04K, F-05J, F-03H
Essential Products PH-1
Fujitsu F-01K
General Mobile GM8, GM8 Go
Google Pixel 2 XL, Pixel 2, Pixel XL, Pixel
HTC U12+, HTC U11+
Huawei Honor Note10, nova 3, nova 3i, Huawei Nova 3I, 荣耀9i, 华为G9青春版, Honor Play, G9青春版, P20 Pro, Honor V9, huawei nova 2, P20 lite, Honor 10, Honor 8 Pro, Honor 6X, Honor 9, nova 3e, P20, PORSCHE DESIGN HUAWEI Mate RS, FRD-L02, HUAWEI Y9 2018, Huawei Nova 2, Honor View 10, HUAWEI P20 Lite, Mate 9 Pro, Nexus 6P, HUAWEI Y5 2018, Honor V10, Mate 10 Pro, Mate 9, Honor 9, Lite, 荣耀9青春版, nova 2i, HUAWEI nova 2 Plus, P10 lite, nova 青春版本, FIG-LX1, HUAWEI G Elite Plus, HUAWEI Y7 2018, Honor 7S, HUAWEI P smart, P10, Honor 7C, 荣耀8青春版, HUAWEI Y7 Prime 2018, P10 Plus, 荣耀畅玩7X, HUAWEI Y6 2018, Mate 10 lite, Honor 7A, P9 Plus, 华为畅享8, honor 6x, HUAWEI P9 lite mini, HUAWEI GR5 2017, Mate 10
Itel P13
Kyocera X3
Lanix Alpha_950, Ilium X520
Lava Z61, Z50
LGE LG Q7+, LG G7 ThinQ, LG Stylo 4, LG K30, V30+, LG V35 ThinQ, Stylo 2 V, LG K20 V, ZONE4, LG Q7, DM-01K, Nexus 5X, LG K9, LG K11
Motorola Moto Z Play Droid, moto g(6) plus, Moto Z Droid, Moto X (4), Moto G Plus (5th Gen), Moto Z (2) Force, Moto G (5S) Plus, Moto G (5) Plus, moto g(6) play, Moto G (5S), moto e5 play, moto e(5) play, moto e(5) cruise, Moto E4, Moto Z Play, Moto G (5th Gen)
Nokia Nokia 8, Nokia 7 plus, Nokia 6.1, Nokia 8 Sirocco, Nokia X6, Nokia 3.1
OnePlus OnePlus 6, OnePlus5T, OnePlus3T, OnePlus5, OnePlus3
Oppo CPH1803, CPH1821, CPH1837, CPH1835, CPH1819, CPH1719, CPH1613, CPH1609, CPH1715, CPH1861, CPH1831, CPH1801, CPH1859, A83, R9s Plus
Positivo Twist, Twist Mini
Samsung Galaxy A8 Star, Galaxy J7 Star, Galaxy Jean, Galaxy On6, Galaxy Note9, Galaxy J3 V, Galaxy A9 Star, Galaxy J7 V, Galaxy S8 Active, Galaxy Wide3, Galaxy J3 Eclipse, Galaxy S9+, Galaxy S9, Galaxy A9 Star Lite, Galaxy J7 Refine, Galaxy J7 Max, Galaxy Wide2, Galaxy J7(2017), Galaxy S8+, Galaxy S8, Galaxy A3(2017), Galaxy Note8, Galaxy A8+(2018), Galaxy J3 Top, Galaxy J3 Emerge, Galaxy On Nxt, Galaxy J3 Achieve, Galaxy A5(2017), Galaxy J2(2016), Galaxy J7 Pop, Galaxy A6, Galaxy J7 Pro, Galaxy A6 Plus, Galaxy Grand Prime Pro, Galaxy J2 (2018), Galaxy S6 Active, Galaxy A8(2018), Galaxy J3 Pop, Galaxy J3 Mission, Galaxy S6 edge+, Galaxy Note Fan Edition, Galaxy J7 Prime, Galaxy A5(2016)
Sharp シンプルスマホ4, AQUOS sense plus (SH-M07), AQUOS R2 SH-03K, X4, AQUOS R SH-03J, AQUOS R2 SHV42, X1, AQUOS sense lite (SH-M05)
Sony Xperia XZ2 Premium, Xperia XZ2 Compact, Xperia XA2, Xperia XA2 Ultra, Xperia XZ1 Compact, Xperia XZ2, Xperia XZ Premium, Xperia XZ1, Xperia L2, Xperia X
Tecno F1, CAMON I Ace
Vestel Vestel Z20
Vivo vivo 1805, vivo 1803, V9 6GB, Y71, vivo 1802, vivo Y85A, vivo 1726, vivo 1723, V9, vivo 1808, vivo 1727, vivo 1724, vivo X9s Plus, Y55s, vivo 1725, Y66, vivo 1714, 1609, 1601
Vodafone Vodafone Smart N9
Xiaomi Mi A2, Mi A2 Lite, MI 8, MI 8 SE, MIX 2S, Redmi 6Pro, Redmi Note 5 Pro, Redmi Note 5, Mi A1, Redmi S2, MI MAX 2, MI 6X
ZTE BLADE A6 MAX

Thank you to everyone internally and externally who helped make Android safer and stronger in the past year. Together, we made a huge investment in security research that helps Android users everywhere. If you want to get involved to make next year even better, check out our detailed program rules. For tips on how to submit complete reports, see Bug Hunter University.

September 20th 2018, 12:32 pm

Notifying your users with FCM

Android Developers Blog

Posted by Jingyu Shi, Developer Advocate, Partner Devrel

This is the second in a series of blog posts that outline strategies and guidance in Android with regard to power.

Notifications are a powerful channel you can use to keep your app's users connected and updated. Android provides Notification APIs to create and post notifications on the device, but quite often these notifications are triggered by external events and sent to your app from your app server.

In this blog post, we'll explain when and how to generate these remote notifications to provide timely updates to users and minimize battery drain.

Use FCM for remote notifications

We recommend using Firebase Cloud Messaging (FCM) to send remote notifications to Android devices. FCM is a free, cross-platform messaging solution that reliably delivers hundreds of billions of messages per day. It is primarily used to send remote notifications and to notify client applications that data is available to sync. If you still use Google Cloud Messaging (GCM) or the C2DM library, both of which are deprecated, it's time to upgrade to FCM!

There are two types of FCM messages you can choose from: notification messages, which the FCM SDK displays on your app's behalf, and data messages, which are delivered to your app's message handler for processing.

You can set the priority of data messages to either high or normal. You can find out more about FCM messages and message handling in this blog post on the Firebase Blog.

FCM is optimized to work with Android power management features. Using the appropriate message priority and type helps you reach your users in a timely manner, and also helps save their battery. Learn more about power management features in this blog post: "Moar Power in Android 9 Pie and the future".

To notify or not?

All of the notifications that you send should be well-structured and actionable, as well as provide timely and relevant information to your users. We recommend that you follow these notification guidelines, and avoid spamming your users. No one wants to be distracted by irrelevant or poorly-structured notifications. If your app behaves like this, your users may block the notifications or even uninstall your app.

The When not to use a notification section of the Material Design documentation for notifications highlights cases where you should not send your user a notification. For example, a common use case for a normal priority FCM data message is to tell the app when there's content ready for sync, which requires no user interaction. The sync should happen quietly in the background, with no need for a notification, and you can use the WorkManager¹ or JobScheduler API to schedule the sync.

Post a notification first

If you are sending remote notifications, you should always post the notification as soon as possible upon receiving the FCM message. Adding any additional network requests before posting a notification will lead to delayed notifications for some of your users. When not handled properly, the notifications might not be seen at all; see the "Avoid background services" section below.


⚠️ Avoid adding any additional network requests before posting a notification

Also keep in mind that, depending on the state of the device, user actions, and app behavior, one or many power saving features could be restricting your app's background work. As a result, your app's jobs and alarms might be delayed, and its ability to access the network might be restricted.

For all of these reasons, to ensure timely delivery of the notification, you should always show the notification promptly when the FCM message is received, before any other work like network fetch or scheduling jobs.

FCM message payload is your friend

To post a notification upon the receipt of an FCM message, you should include all the data needed for the notification in the FCM message payload.

The same applies to data sync--we recommend that your app send as much data as possible in the FCM payload and, if needed, load the remainder of the data when the app opens. On a well-performing network, there's a good chance that the data will be synced by the time the user opens the app, so no loading spinner needs to be shown. If network connectivity is poor, the content in the FCM payload can still be used to post a notification that informs the user in a timely manner; the user can then open the app to load all the data.

You can also encrypt FCM messages end-to-end using libraries like Capillary. The image below shows a general flow of how to handle FCM messages.
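
As a concrete sketch of that flow, the handler below posts a notification straight from the FCM data payload before doing anything else. The payload keys, channel id, notification id, and icon are assumptions for illustration, and the notification channel is assumed to have been created at app startup:

import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import com.google.firebase.messaging.FirebaseMessagingService
import com.google.firebase.messaging.RemoteMessage

class MyMessagingService : FirebaseMessagingService() {

    override fun onMessageReceived(remoteMessage: RemoteMessage) {
        // Post the notification first, using only what is already in the payload.
        val title = remoteMessage.data["title"] ?: return
        val body = remoteMessage.data["body"] ?: ""

        val notification = NotificationCompat.Builder(this, CHANNEL_ID)
            .setSmallIcon(R.drawable.ic_notification)   // assumes this drawable exists
            .setContentTitle(title)
            .setContentText(body)
            .setAutoCancel(true)
            .build()

        NotificationManagerCompat.from(this).notify(NOTIFICATION_ID, notification)

        // Only after the notification is visible should you schedule any follow-up work.
    }

    companion object {
        private const val CHANNEL_ID = "messages"   // channel assumed to be created elsewhere
        private const val NOTIFICATION_ID = 1
    }
}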

Need more data?

As convenient as the FCM message payload is, it comes with a 4KB maximum limit. If you need to send a rich notification with an image attachment, or you want to improve your user experience by keeping your app in sync with media content, you may need more than the 4KB payload limit allows. For this, we recommend using FCM messages in combination with the WorkManager¹ or JobScheduler API.

If you need to post a rich notification, we recommend posting the notification first, with some of the content in the FCM message. Then schedule a job to fetch the remainder of the content. Once the job is finished, update the notification if it is still active. For example, you can include a thumbnail or preview of the content in the FCM payload and post it in the notification first. Then schedule a job to fetch the rest of the media files. Be aware that if you've scheduled jobs from the FCM message handler, it is possible that when the user launches the app, the scheduled job won't have finished yet. You should handle this case gracefully.

In short, use the data in the FCM message payload to post a notification and keep your app content updated first. If you still need more data, then schedule jobs with APIs like WorkManager¹ or JobScheduler.
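
A rough sketch of that pattern is below: the FCM handler enqueues a worker that fetches the full-size image and then re-posts the same notification id with richer content. The worker class, input keys, channel id, notification id, and download helper are all hypothetical:

import android.content.Context
import android.graphics.Bitmap
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.work.Worker
import androidx.work.WorkerParameters

// Hypothetical worker that downloads the full-size image referenced in the FCM payload
// and updates the already-posted notification with it.
class FetchAttachmentWorker(context: Context, params: WorkerParameters) : Worker(context, params) {

    override fun doWork(): Result {
        val imageUrl = inputData.getString("image_url") ?: return Result.failure()
        val bitmap: Bitmap = downloadImage(imageUrl) ?: return Result.retry()

        // Re-posting with the same id updates the existing notification if it is still active.
        val updated = NotificationCompat.Builder(applicationContext, "messages")
            .setSmallIcon(R.drawable.ic_notification)   // assumes this drawable exists
            .setContentTitle(inputData.getString("title"))
            .setStyle(NotificationCompat.BigPictureStyle().bigPicture(bitmap))
            .build()
        NotificationManagerCompat.from(applicationContext).notify(1, updated)
        return Result.success()
    }

    private fun downloadImage(url: String): Bitmap? {
        // Placeholder: replace with your own HTTP fetch and bitmap decoding.
        return null
    }
}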

Avoid background services

One common pitfall is using a background service to fetch data in the FCM message handler, since background services will be stopped by the system per recent changes to Google Play policy (starting late 2018, Google Play requires app updates to target API level 26 or higher, which subjects apps to Android's background execution limits).

Android 9 Pie will also impose background execution limits when Battery Saver is on. Starting a background service from a normal priority FCM message will lead to an IllegalStateException. High priority messages do grant you a short whitelist window that allows you to start a background service. However, starting a background service that makes a network call puts the service at risk of being terminated by the system, because the short execution window is only intended to be used for posting a notification.

You should avoid using background services; use WorkManager¹ or the JobScheduler API instead to perform operations in the background.

Power & message priority

Android 6 Marshmallow introduced Doze. FCM is optimized to work with Doze, and you can use high priority FCM messages to notify your users immediately. In Doze mode, normal priority messages are deferred to a maintenance window. This enables the system to save battery when a device is idle, but still ensures users receive time-critical notifications. Consider an instant messaging app that notifies users of messages from friends or incoming calls, or a home monitoring app that sends users alarm notifications; these are acceptable examples of when to use high priority FCM messages.

In addition, Android 9 Pie introduced App Standby Buckets and App Restrictions.

The table below shows how various power-management features affect message delivery behaviors.

  • App in foreground: both high and normal priority messages are delivered immediately, unless the app is restricted (see below).
  • App in background, device in Doze (M+) or Doze "on the go" (N+): high priority messages are delivered immediately; normal priority messages are deferred until a maintenance window.
  • App in background, App Standby Buckets (P+): high priority messages may be restricted; normal priority messages have no restriction.
  • App in background, App Restrictions (P+): all messages are dropped, both high and normal priority (see below).
  • App in background, Battery Saver: no restriction for either priority.


★ Note: Starting January 2019, App Restrictions (in Battery Settings) will include restrictions on FCM messages. You can find out if your app is in the restricted state with the isBackgroundRestricted API. Once your app is in the restricted state, no FCM messages will be delivered to the app at all. This applies to both high and normal priority FCM messages, and whether the app is in the foreground or background.
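
A minimal sketch of that check on Android 9 and above is shown here; the helper name is hypothetical, and older releases simply report no restriction:

import android.app.ActivityManager
import android.content.Context
import android.os.Build

// Returns true if the user has placed this app in the restricted state,
// in which case FCM messages will not be delivered to it at all.
fun isAppBackgroundRestricted(context: Context): Boolean {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.P) return false
    val activityManager = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    return activityManager.isBackgroundRestricted
}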

App Standby Buckets impose different levels of restrictions based on the app's standby bucket. Based on which bucket your app belongs to, there might be a cap for the number of high priority messages you are allowed to send per day. Once you reach the cap, any subsequent high priority messages will be downgraded to normal priority. See more details in the power management restrictions.

High priority FCM messages are designed to send remote notifications or trigger actions that involve user interactions. As long as you always use high priority messages for these purposes, your high priority messages will be delivered immediately and remote notifications will be displayed without delay. In addition, when a notification from a high priority message causes a user to open your app, the app gets promoted to the active bucket, which exempts it from FCM caps. The example below shows an instant messaging app moving to the active bucket after the user taps on a notification triggered by a high priority FCM message.

However, if you use high priority messages to send notifications to blocked notification channels, or to trigger tasks that do not involve user interaction, you risk wasting the high priority messages allocated to your app's bucket. Once you reach the cap, you won't be able to send urgent notifications anymore.

In summary, you should only use high priority FCM messages to deliver immediate, time-critical notifications to users. Doing so will ensure these messages and subsequent high priority messages reach your users without getting downgraded. You should use normal priority messages to trigger events that do not require immediate execution, such as a notification that is not time-sensitive or a data sync in the background.
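
For reference, here is a hedged server-side sketch of sending a high priority data message to a single device with the Firebase Admin SDK (Java/Kotlin flavor); it assumes the Admin SDK has already been initialized, and the registration token and payload keys are placeholders:

import com.google.firebase.messaging.AndroidConfig
import com.google.firebase.messaging.FirebaseMessaging
import com.google.firebase.messaging.Message

// Reserve HIGH priority for time-critical, user-facing messages;
// use NORMAL for background syncs and other deferrable work.
fun sendUrgentMessage(registrationToken: String) {
    val message = Message.builder()
        .setToken(registrationToken)
        .putData("title", "Incoming call")
        .putData("body", "Alice is calling you")
        .setAndroidConfig(
            AndroidConfig.builder()
                .setPriority(AndroidConfig.Priority.HIGH)
                .build()
        )
        .build()

    FirebaseMessaging.getInstance().send(message)
}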

Test with Android 9!

We highly recommend that you test your apps under all of the power management features mentioned above. To learn more about handling FCM messages on Android in your code, visit the Firebase blog.

Thank you for helping move the ecosystem forward, making better Android apps, and saving users' batteries!

Acknowledgements: This blog post is a joint collaboration between the FCM and Android teams.

¹ WorkManager is the recommended solution for background processing once it's stable.

September 18th 2018, 1:59 pm

Moar Power in Android 9 Pie and the future

Android Developers Blog

Posted by Madan Ankapura, Product Manager, Android

This is the first in a series of blog posts that outline strategies and guidance in Android with regard to power.

Your users care a lot about battery -- if it runs out too quickly, it means they can't use your apps. Being a good steward of battery power is an important part of your relationship with the user, and we're continuing to add features to the platform that can help you accomplish this.

As part of our announced Play policy about improving app security and performance, an app's target API level must be no more than one year older than the current Android release. Keeping the target API level current will ensure that apps can take advantage of security and performance enhancements offered in the latest platform releases. When you update your app's target API level, it's important that you evaluate your background and foreground needs, which could have a significant impact on power & performance.

Past releases of Android included a number of features that helped manage battery life better, like:

In Android 9 Pie, we made further improvements based on these three principles:

  1. Developers want to build cool apps
  2. Apps need to be power-efficient
  3. Users don't want to be bothered to configure app settings

This means that the OS needs to be smarter and adapt to user preferences while improving the battery life of the device. To address these needs, we have introduced App Standby Buckets, Background Restrictions, and improved Battery Saver. Please test your app with these features enabled on a device running Android 9 Pie.

Battery Saver and Doze operate on a device-wide level, while Adaptive Battery (App Standby Buckets powered by a DeepMind ML model) and background restrictions operate on a per-app basis. The diagram below helps you understand when scheduled work will run.

As you update your apps to target Oreo or above, please review this checklist and follow the table below for background work:

Currently using              Porting to Oreo
JobScheduler                 JobScheduler
Firebase JobDispatcher       Firebase JobDispatcher
Background Service           JobScheduler
Foreground Service           Foreground Service with an action to stop the service

Note: when the WorkManager API becomes stable, we will recommend WorkManager for most of these use cases.

We recommend the following strategy given the importance for app developers to invest in the right design patterns and architecture:

  1. Do the needed work when the user is actively using the app
  2. Make any work/task that is done in the background deferrable
  3. Use foreground services, but provide an action in the notification so the user can stop the foreground service (see the sketch below)
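
Here is a rough sketch of the third point: a foreground service whose notification carries a Stop action, so the user stays in control. The service name, channel id, icons, and action string are hypothetical, and the notification channel is assumed to exist:

import android.app.PendingIntent
import android.app.Service
import android.content.Intent
import android.os.IBinder
import androidx.core.app.NotificationCompat

// Hypothetical playback service: runs in the foreground and lets the user stop it
// directly from the notification.
class PlaybackService : Service() {

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        if (intent?.action == ACTION_STOP) {
            stopForeground(true)
            stopSelf()
            return START_NOT_STICKY
        }

        val stopIntent = Intent(this, PlaybackService::class.java).setAction(ACTION_STOP)
        val stopPendingIntent =
            PendingIntent.getService(this, 0, stopIntent, PendingIntent.FLAG_UPDATE_CURRENT)

        val notification = NotificationCompat.Builder(this, "playback")   // channel assumed to exist
            .setSmallIcon(R.drawable.ic_play)                             // assumed drawable
            .setContentTitle("Playing")
            .addAction(R.drawable.ic_stop, "Stop", stopPendingIntent)     // user-visible Stop action
            .build()

        startForeground(NOTIFICATION_ID, notification)
        return START_STICKY
    }

    override fun onBind(intent: Intent?): IBinder? = null

    companion object {
        private const val ACTION_STOP = "com.example.action.STOP"
        private const val NOTIFICATION_ID = 1
    }
}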

Similarly, other OS primitives like alarms, network, and FCM messages also have constraints that are described in the developer documentation on power-management restrictions. You can learn more about each of these features via Google I/O presentation, DevByte and additional power optimization developer documentation.

We will be publishing a series of design pattern guides in the upcoming weeks. Stay tuned.

Acknowledgements: This series of blog posts is a joint collaboration between the Android Framework and DevRel teams.

September 11th 2018, 4:35 pm

Staged releases allow you to bring new features to your users quickly, safely and regularly.

Android Developers Blog

Posted by Peter Armitage, Software Engineer, Google Play

Releasing a new version of your app is an exciting moment when your team's hard work finally gets into the hands of your users. However, releasing can also be challenging - you want to keep your existing users happy without introducing performance regressions or bugs. At Google I/O this year, we talked about staged releases as an essential part of how Google does app releases, allowing you to manage the inherent risks of a new release by making a new version of your app available to just a fraction of your users. You can then increase this fraction as you gain confidence that your new version works as expected. We are excited that starting today staged releases will be possible on testing tracks, as well as the production track.

We will take a closer look at how staged releases work, and how you can use them as part of your release process.

Advantages of a staged release

The first benefit of a staged release is that it only exposes a fraction of your users to the new version. If the new version contains a bug, only a small number of people will be inconvenienced by it. This is much safer than releasing a new version to all of your users at once.

Another benefit is that if you discover a bug, you can halt the rollout, preventing any new users from downloading that version. Instead, they will receive the previous version.

These capabilities should relieve a lot of the uncertainty of rolling out a new version, which will allow you to do it more often. We encourage releasing new versions of a server frequently because it reduces the changes between versions, making them easier to test and troubleshoot. The same principle applies to apps, though there will be a delay before most of your users upgrade to the latest version.

Staged releases as part of your normal release process

Let's look at a typical release process for an app with 100,000 users.

  1. Every Monday the developer builds a new version of the app from the latest version of the code that passes the automatic tests. They push the new release to Google Play's internal test track, and their QA team immediately starts testing it manually. Any bugs they find can be fixed and a new version can be built and pushed for them to re-check.
  2. On Tuesday, if the QA team have approved the latest release, it can be promoted to the app's alpha track. All the employees at the company have opted in to testing. Once the new release is pushed to the alpha track, the employees can download the new version. They can do this manually, or they may have auto-updates enabled, in which case they will probably update within a few hours.
  3. On Wednesday, if there are no reported issues with the release, they can promote the release to the production track and start a rollout at 10%. This means 10,000 users will have the opportunity to upgrade. Some will upgrade immediately, others will wait. The 10% of users that receive the app first are randomly selected, and the users will be randomly chosen each week.
  4. On Thursday, the developer checks the Play Console to see their crash reports, Android vitals, and feedback. If these all look good they can increase the rollout to 100%. All users will be able to upgrade to the new version.
  5. On Friday, the developer doesn't change anything, to ensure a stress-free weekend!

For big apps and small apps

Some apps are just starting out, and although there's no QA team, it's still worth testing the app on a few different devices before releasing it. Instead of having a track for employees, the developer has added their friends and family, who can contact them if they see an issue.

When an app gets larger and uses the open testing track, it may have 5,000 testers. These testers won't give public feedback on the Play store, but will be able to give feedback to the developer directly. If this app has 1 million users, they may first release to 1%, before going to 10%, then 100%.

Once an app becomes very popular, it could have over 100,000 testers. In that case the developer is now able to do a staged release on their testing track.

How to bounce back from issues

Bugs happen, and if you discover a problem with your new version you may want to halt the release. This will stop users from getting the new version, either by upgrading or installing for the first time. However, those who have already got the new version will not downgrade.

If the issue was not in the app itself, but on a server that the app communicates with, it may be best to fix the issue in the server, then resume the release. Resuming it allows some fraction of your users to access the new version again. This is the same set of users that were able to download the release before it was halted.

If the issue was in the app, you will have to fix it and release a new version. Or alternatively, you may choose to rebuild the previous version with a higher version code. Then you can start a staged release to the same set of users that the previous release went to.

API support

Staged releases are supported in v3 of the Play Console API on all tracks. Mark a release as "inProgress" and set a fraction of the population to target. For instance, to start a staged release to 5%:

{
  "releases": [{
      "versionCodes": ["99"],
      "userFraction": 0.05,
      "status": "inProgress"
  }]
}

Alternatively, if you release using the UI, it will suggest a fraction.

What next?

We hope you find these features useful and take advantage of them for successful updates with Google Play. If you're interested in some of the other great tools for distributing your apps, check out the I/O 2018 sessions, and learn more about test tracks and staged updates.


September 5th 2018, 1:18 pm

Make the most of Notifications with the redesigned Wear OS by Google

Android Developers Blog

Posted by Hoi Lam, Lead Developer Advocate, Wear OS by Google

Today we announced that we are evolving the design of Wear OS by Google to help you get the most out of your time - providing quicker access to your information and notifications. Notifications can come from the automatic bridging of the phone's notifications, or be generated by a local Wear app running on the watch. Whether you are a phone developer, a Wear app developer, or both, there are a few things you will need to know about the new notification stream.

The new notification stream

Until now, each notification took up the entire screen in Wear OS. Although this provided more space to include things like inline actions, it also meant it took a long time for the user to go through all their notifications. The new notification stream is more compact, and can display multiple notifications on the same screen. This means users can process their notification streams more quickly.

What this means for developers

As always, the current best practices for notifications still apply. In particular, for messaging app developers, we strongly encourage the use of MessagingStyle notifications and enabling on-device Smart Reply through setAllowGeneratedReplies.
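
A hedged sketch of those two recommendations together is shown below: a MessagingStyle notification whose reply action opts in to on-device Smart Reply. The channel id, icons, strings, and pending intent are placeholders:

import android.app.PendingIntent
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.RemoteInput

fun buildChatNotification(context: Context, replyPendingIntent: PendingIntent): NotificationCompat.Builder {
    // Conversation rendered with MessagingStyle so Wear OS can lay it out natively.
    val style = NotificationCompat.MessagingStyle("Me")
        .addMessage("Lunch tomorrow?", System.currentTimeMillis(), "Alice")

    val remoteInput = RemoteInput.Builder("key_text_reply")
        .setLabel("Reply")
        .build()

    // setAllowGeneratedReplies(true) enables on-device Smart Reply suggestions.
    val replyAction = NotificationCompat.Action.Builder(R.drawable.ic_reply, "Reply", replyPendingIntent)
        .addRemoteInput(remoteInput)
        .setAllowGeneratedReplies(true)
        .build()

    return NotificationCompat.Builder(context, "chat")   // channel assumed to exist
        .setSmallIcon(R.drawable.ic_message)              // assumed drawable
        .setStyle(style)
        .addAction(replyAction)
}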

We will start rolling these changes out in the next month, so watch for updates on your Wear OS by Google smartwatch!

August 29th 2018, 1:23 pm

Verifying your Google Assistant media action integrations on Android

Android Developers Blog

Posted by Nevin Mital, Partner Developer Relations

The Media Controller Test (MCT) app is a powerful tool that allows you to test the intricacies of media playback on Android, and it's just gotten even more useful. Media experiences, including voice interactions via the Google Assistant on Android phones, cars, TVs, and headphones, are powered by Android MediaSession APIs. This tool will help you verify your integrations. We've now added a new verification testing framework that can be used to help automate your QA testing.

The MCT is meant to be used in conjunction with an app that implements media APIs, such as the Universal Android Music Player. The MCT surfaces information about the media app's MediaController, such as the PlaybackState and Metadata, and can be used to test inter-app media controls.

The Media Action Lifecycle can be complex to follow; even in a simple Play From Search request, there are many intermediate steps (simplified timeline depicted below) where something could go wrong. The MCT can be used to help highlight any inconsistencies in how your music app handles MediaController TransportControl requests.

Previously, using the MCT required a lot of manual interaction and monitoring. The new verification testing framework offers one-click tests that you can run to ensure that your media app responds correctly to a playback request.

Running a verification test

To access the new verification tests in the MCT, click the Test button next to your desired media app.

The next screen shows you detailed information about the MediaController, for example the PlaybackState, Metadata, and Queue. There are two buttons on the toolbar in the top right: the button on the left toggles between parsable and formatted logs, and the button on the right refreshes this view to display the most current information.

By swiping to the left, you arrive at the verification tests view, where you can see a scrollable list of defined tests, a text field to enter a query for tests that require one, and a section to display the results of the test.

As an example, to run the Play From Search Test, you can enter a search query into the text field then hit the Run Test button. Looks like the test succeeded!

Below are examples of the Pause Test (left) and Seek To test (right).

Android TV

The MCT now also works on Android TV! For your media app to work with the Android TV version of the MCT, your media app must have a MediaBrowserService implementation. Please see here for more details on how to do this.
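
For orientation, here is a minimal, hedged skeleton of a MediaBrowserService built on the compat library; the class name is hypothetical, and a real implementation would also create a MediaSession, set its token, and return real media items:

import android.os.Bundle
import android.support.v4.media.MediaBrowserCompat
import androidx.media.MediaBrowserServiceCompat

// Bare-bones browser service so tools like the MCT (and Android TV) can discover the app.
class MyMusicService : MediaBrowserServiceCompat() {

    override fun onGetRoot(
        clientPackageName: String,
        clientUid: Int,
        rootHints: Bundle?
    ): BrowserRoot? {
        // Returning a root lets clients browse; a real app may restrict access by package here.
        return BrowserRoot("root", null)
    }

    override fun onLoadChildren(
        parentId: String,
        result: Result<MutableList<MediaBrowserCompat.MediaItem>>
    ) {
        // Placeholder: send an empty list; a real app would return its browsable content.
        result.sendResult(mutableListOf())
    }
}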

On launching the MCT on Android TV, you will see a list of installed media apps. Note that an app will only appear in this list if it implements the MediaBrowserService.

Selecting an app will take you to the testing screen, which will display a list of verification tests on the right.

Running a test will populate the left side of the screen with selected MediaController information. For more details, please check the MCT logs in Logcat.

Tests that require a query are marked with a keyboard icon. Clicking on one of these tests will open an input field for the query. Upon hitting Enter, the test will run.

To make text input easier, you can also use the ADB command:

adb shell input text [query]

Note that '%s' will add a space between words. For example, the command adb shell input text hello%sworld will add the text "hello world" to the input field.

What's next

The MCT currently includes simple single-media-action tests for a range of requests, including the Play From Search, Pause, and Seek To requests shown above.

For a technical deep dive on how the tests are structured and how to add more tests, visit the MCT GitHub Wiki. We'd love for you to submit pull requests with more tests that you think are useful to have and for any bug fixes. Please make sure to review the contributions process for more information.

Check out the latest updates on GitHub!

August 27th 2018, 3:02 pm

Exclusive new organic acquisition insights on the Google Play Console

Android Developers Blog

Posted by Tom Grinsted, Product Manager, Google Play

We've updated the Play Console acquisition reports to give new insights into what users do on the Play Store to discover your app. It's a great way to super-charge your App Store Optimization (ASO) and onboarding experience.

One of the things every developer wants to know is how people discover their app or game. User acquisition reports in the Google Play Console are a great way to understand this. For many apps and games, a stand-out source is Organic traffic — it's usually the largest or second largest source of store listing visits and installs.

Organic traffic is made up of people who come to your store listing while exploring or searching the Play Store. These visitors might find your app in a seasonal collection, from featuring, or while searching for a specific use case or term.

Until recently, this traffic has been bundled together with no breakdown into user behavior. With our latest updates we have changed this by introducing new and exclusive acquisition insights to the Google Play Console. These enable you to understand what people in the Play Store do to discover your app or game. They reveal how many people discover your app by exploring the store, how many find it through search, and even the search terms they use!

App Store Optimization (ASO) is vital to driving your organic traffic and this update enables you to do this with more data and better understanding.

A new data breakdown

When you visit the user acquisition report, the first change you'll notice is that organic traffic is broken down. This breakdown means you can see how people arrive at your store listing by searching or exploring (actions that aren't search like browsing the homepage, visiting a category list, or viewing related apps).

This change has been of immediate benefit to developers, enabling their growth teams to optimize acquisition strategies. For example, Scopely found that:

"Isolating [explore] from search and then a deeper dive into search gives the whole organic picture. It allows us to focus on acquisition areas that really matter." Dorothee Pinlet, VP Partnerships, Scopely


Click through for more insights

From the new search row, you can click-through to see the aggregate number of people using different search terms to find your store listing, and which of those lead to the most installs. This breakdown is a view into the Play Store that has not been available before.

Our pilot partners, who helped us refine the feature ahead of launch, were very happy with how this data has helped them make more informed decisions.

For example, at Fun games for free:

"We were impressed by the relevance of the long tail searches."
Guilherme Major, Head of Organic Distribution and Business Development, Fun Games for Free

While Evernote found that the breakdown:

"... offers surprising and actionable insights about the effectiveness of search terms in driving installs and retained users."
May Allen, Product Manager, Evernote

Some partners changed their in-app onboarding experience to highlight features that reflected the search terms driving installs, to better meet user expectations. Others evaluated whether their influencer marketing was having an impact by looking for their advocates' names in the search results after adding them to their descriptions.

Better coverage

The new organic data also includes information about when people visiting the Play Store saw previews of your listing, not just when they visited your full page. People see these previews when they make certain searches, such as searching directly for a brand or app name, and more generally in some markets. This new information gives you more visibility into where people see your assets, and it helps you decide how to optimize those assets, for instance by ensuring that your screenshots are impactful. And when you come to do that, Store Listing Experiments can help.

This change means that your total reported visits and installs are likely to increase as of July 30, 2018. This increase is because previews will be counted as listing views; previously, they were included in the category "Installs without store listing visits".

Putting the data to work

The developers who had the opportunity to test Organic breakdowns have given feedback that they loved them. They've also been kind enough to share some insights into how they plan to use the data. Perhaps these thoughts on how to use the data will spark some ideas for your business.

Some developers will be using this new data to evaluate their acquisition strategies by looking at the breakdown between explore and search. They will use this breakdown to evaluate the impact of exploring behaviors, especially around times when the app has been featured on the Play Store.

Using the information about popular search terms, several developers plan to change their app or game's Google Play listing to reflect user interests better. This change involves adjusting the descriptions and screenshots to tie more directly into the top search terms.

Others plan to use the insight provided by search term information to optimize their in-app onboarding. Here they plan to make sure that the onboarding talks about the features related to the most popular searches people made when discovering their app or game, highlighting and reinforcing the benefits.

Final word

Our team is always thinking about the tools we can build to help you optimize the discovery and installation of your app or game from the Play Store. Organic breakdowns is just one of these tools, a new way to help drive your success. Ultimately, your success is what we work towards. Organic breakdowns give you a more comprehensive picture of how people discover you on the Play Store so you can optimize your store presence, turning more visits into installs, and more installs into engaged users.

How useful did you find this blog post?

August 24th 2018, 12:30 pm

Evolution of Android Security Updates

Android Developers Blog

Posted by Dave Kleidermacher, VP, Head of Security - Android, Chrome OS, Play

At Google I/O 2018, in our What's New in Android Security session, we shared a brief update on the Android security updates program. With the official release of Android 9 Pie, we wanted to share a more comprehensive update on the state of security updates, including best practice guidance for manufacturers, how we're making Android easier to update, and how we're ensuring compliance to Android security update releases.

Commercial Best Practices around Android Security Updates

As we noted in our 2017 Android Security Year-in-Review, Android's anti-exploitation strength now leads the mobile industry and has made it exceedingly difficult and expensive to leverage operating system bugs into compromises. Nevertheless, an important defense-in-depth strategy is to ensure critical security updates are delivered in a timely manner. Monthly security updates are the recommended best practice for Android smartphones. We deliver monthly Android source code patches to smartphone manufacturers so they may incorporate those patches into firmware updates. We also deliver firmware updates over-the-air to Pixel devices on a reliable monthly cadence and offer the free use of Google's firmware over-the-air (FOTA) servers to manufacturers. Monthly security updates are also required for devices covered under the Android One program.

While monthly security updates are best, at minimum, Android manufacturers should deliver regular security updates in advance of coordinated disclosure of high severity vulnerabilities, published in our Android bulletins. Since the common vulnerability disclosure window is 90 days, updates on a 90-day frequency represent a minimum security hygiene requirement.

Enterprise Best Practices

Product security factors into purchase decisions of enterprises, who often consider device security update cadence, flexibility of policy controls, and authentication features. Earlier this year, we introduced the Android Enterprise Recommended program to help businesses make these decisions. To be listed, Android devices must satisfy numerous requirements, including regular security updates: at least every 90 days, with monthly updates strongly recommended. In addition to businesses, consumers interested in understanding security update practices and commitment may also refer to the Enterprise Recommended list.

Making Android Easier to Update

We've also been working to make Android easier to update, overall. A key pillar of that strategy is to improve modularity and clarity of interfaces, enabling operating system subsystems to be updated without adversely impacting others. Project Treble is one example of this strategy in action and has enabled devices to update to Android P more easily and efficiently than was possible in previous releases. The modularity strategy applies equally well for security updates, as a framework security update can be performed independently of device specific components.

Another part of the strategy involves the extraction of operating system services into user-mode applications that can be updated independently, and sometimes more rapidly, than the base operating system. For example, Google Play services, including secure networking components, and the Chrome browser can be updated individually, just like other Google Play apps.

Partner programs are a third key pillar of the updateability strategy. One example is the GMS Express program, in which Google is working closely with system-on-chip (SoC) suppliers to provide monthly pre-integrated and pre-tested Android security updates for SoC reference designs, reducing cost and time to market for delivering them to users.

Security Patch Level Compliance

Recently, researchers reported a handful of missing security bug fixes across some Android devices. Initial reports had several inaccuracies, which have since been corrected. We have been developing security update testing systems that are now making compliance failures less likely to occur. In particular, we recently delivered a new testing infrastructure that enables manufacturers to develop and deploy automated tests across lower levels of the firmware stack that were previously relegated to manual testing. In addition, the Android build approval process now includes scanning of device images for specific patterns, reducing the risk of omission.

Looking Forward

In 2017, about a billion Android devices received security updates, representing approximately 30% growth over the preceding year. We continue to work hard devising thoughtful strategies to make Android easier to update by introducing improved processes and programs for the ecosystem. In addition, we are also working to drive increased and more expedient partner adoption of our security update and compliance requirements. As a result, over coming quarters, we expect the largest ever growth in the number of Android devices receiving regular security updates.

Bugs are inevitable in all complex software systems, but exploitability of those bugs is not. We're working hard to ensure that the incidence of potentially harmful exploitation of bugs continues to decline, such that the frequency of security updates will decrease, not increase, over time. While monthly security updates represent today's best practice, we see a future in which security updates become easier and rarer, while maintaining the same goal of protecting all users across all devices.

August 22nd 2018, 1:57 pm

Streamlining the developer experience for instant games

Android Developers Blog

Posted by Vlad Zavidovych, Software Engineer; Artem Yudin, Software Engineer

Google Play Instant enables people to experience your game or app natively without having to go through a full installation process. Removing the friction of installing is a great way to increase engagement, conversions, and lifetime value of your users.

Today, we've made it easier to build instant games and apps by removing the URL requirement. Previously, in order to publish an instant game you had to create a web destination for it. The website also had to be connected to the instant game through intent filters and digital asset links verification.

Now, it is no longer required to add URL-based intent filters to your instant game. People will be able to access the instant experience through a 'Try Now' button in the Play Store or Play Games apps, via the deep link API, and in the future through app ads.

While particularly helpful for games, which often don't have a corresponding website, the new URL-less functionality is available to both game and app developers.

How to develop and publish an instant game without adding URL support

Game developers using Unity or the latest Cocos Creator can take advantage of URL-less instant games by simply leaving the URL fields blank in the setup process.

However, if you have your own game engine or have built your game from scratch in C++, check the AndroidManifest to make sure it has the following intent filter declaration:

<intent-filter>
   <action android:name="android.intent.action.MAIN" />
   <category android:name="android.intent.category.LAUNCHER" />
</intent-filter>

Starting with Android Studio 3.2, you can create a new instant game, or convert your existing game, without associating a URL with it. In fact, this is now the default behavior. Here is a run-through of the process:

  1. First, make sure you're running Android Studio 3.2 or newer by either updating or downloading it here. Make sure to install Instant Apps Development SDK 1.3.0 or higher from Android SDK Manager.
  2. Then download a sample instant app from GitHub. In Android Studio, click File → New → Import Project… and import the downloaded "urlless" sample.
  3. Lastly, after gradle tasks are finished, click the green "Run" button with "instantapp" configuration.

You should see an instant game on your attached device. The instant runtime finds and launches the entry-point activity in your game using the ACTION_MAIN and CATEGORY_LAUNCHER intent filter.

Once you are ready to publish the sample instant game:

  1. Give your sample game a unique applicationId in the app/build.gradle file by replacing the existing applicationId; we don't want different applications with the same ID.
  2. Generate signed APKs for both the installable and instant versions of the sample game.
    • In Android Studio, Build → Generate Signed Bundle / APK…
    • Choose APK for both "app" and "instantapp" modules.
  3. In the Play Console, create a new application, upload APK under "App Releases" tab, and then upload "instantapp-release.zip" under "Android Instant Apps" tab.
    • The installable app must be rolled out before the instant one.
  4. The rollout process may be familiar to most Android developers, but here's a step-by-step guide in case you run into any issues.

Once you publish your instant game, people can access it via a 'Try Now' button in Play Store within 24 hours or sooner. You can also send traffic to your instant game using the deep link API:

market://details?id=MY.PACKAGE.NAME&launch=true&referrer=myreferrer

MY.PACKAGE.NAME refers to the applicationId that you replaced in the app/build.gradle file.
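
As a rough sketch, the deep link above can be fired with a standard ACTION_VIEW intent from another app or from a notification; the package name and referrer below are placeholders.

import android.content.Context
import android.content.Intent
import android.net.Uri

// Placeholder values: replace MY.PACKAGE.NAME and myreferrer with your own.
fun launchInstantExperience(context: Context) {
    val uri = Uri.parse(
        "market://details?id=MY.PACKAGE.NAME&launch=true&referrer=myreferrer")
    context.startActivity(Intent(Intent.ACTION_VIEW, uri))
}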

What's next?

With the launch of Android App Bundle we are excited to further simplify the developer experience for Google Play Instant. In the coming months we are making it possible to deliver your app's or game's dynamic features instantly from the same bundle as your installable app or game. Stay tuned!

Check out more information on Google Play Instant, or feel free to ask a question on Stack Overflow, or report an issue to our public tracker.

How useful did you find this blog post?

August 20th 2018, 3:42 pm

Alternative input methods for Android TV

Android Developers Blog

Posted by Benjamin Baxter, Developer Advocate and Bacon Connoisseur

All TVs have the same problem with keyboard input: It is very cumbersome to hunt and peck for each letter using a D-pad with a remote. And if you make a mistake, trying to correct it compounds the problem.

APIs like Smart Lock and Autofill can ease users' frustrations, but for certain types of input, like login, entering complex text with the on-screen keyboard is still difficult for users.

With the Nearby Connections API, you can use a second screen to gather input from the user with less friction.

How Nearby Connections works

From the documentation:

"Nearby Connections is an offline peer-to-peer socket model for communication based on advertising and discovering devices in proximity.

Usage of the API falls into two phases: pre-connection, and post-connection.

In the pre-connection phase, Advertisers advertise themselves, while Discoverers discover nearby Advertisers and send connection requests. A connection request from a Discoverer to an Advertiser initiates a symmetric authentication flow that results in both sides independently accepting (or rejecting) the connection request.

After a connection request is accepted by both sides, the connection is established and the devices enter the post-connection phase, during which both sides can exchange data."

In most cases the TV is the advertiser and the phone is the discoverer. In the example below, the assumed second device is a phone. The API and patterns described in this article are not limited to a phone. For example, a tablet could also be the second screen device.

Login Example

There are many times when keyboard input is required. Authenticating users and collecting billing information (like zip codes and the name on a card) are common cases. This example walks through a login flow that uses a second screen to show how Nearby Connections can help reduce friction.

1. The user opens your app on their TV and needs to log in. You can show a screen of options similar to the setup flow for a new TV.

2. After the user chooses to log in with their phone, the TV should start advertising and send the user to the associated login app on their phone, which should start discovering.

There are a variety of solutions to open the app on the phone. As an example, Android TV's setup flow has the user open the corresponding app on their mobile device. Initiating the hand-off is more a UX concern than a technology concern.

3. The phone app should display the advertising TV and prompt the user to initiate the connection. After the (encrypted -- see Security Considerations below for more on this) connection is established the TV can stop advertising and the phone can stop discovering.

"Advertising/Discovery using Nearby Connections for hours on end can affect a device's battery. While this is not usually an issue for a plugged-in TV, it can be for mobile devices, so be conscious about stopping advertising and discovery once they're no longer needed."

4. Next, the phone can start collecting the user's input. Once the user enters their login information, the phone should send it to the TV in a BYTES payload over the secure connection.

5. When the TV receives the message it should send an ACK (using a BYTES payload) back to the phone to confirm delivery.

6. When the phone receives the ACK, it can safely close the connection.

The following diagram summarizes the sequence of events:

UX considerations

Nearby Connections needs location permissions to be able to discover nearby devices. Be transparent with your users. Tell them why they need to grant the location permission on their phone.

Since the TV is advertising, it does not need location permissions.

Start advertising: The TV code

After the user chooses to log in on the phone, the TV should start advertising. This is a very simple process with the Nearby API.

override fun onGuidedActionClicked(action: GuidedAction?) {
    super.onGuidedActionClicked(action)
    if( action == loginAction ) {
        // Update the UI so the user knows to check their phone
        navigationFlowCallback.navigateToConnectionDialog()
        doStartAdvertising(requireContext()) { payload ->
            handlePayload(payload)
        }
    }
}

When the user clicks a button, update the UI to tell them to look at their phone to continue. Be sure to offer a way to cancel the remote login and try manually with the cumbersome onscreen keyboard.

This example uses a GuidedStepFragment but the same UX pattern applies to whatever design you choose.

Advertising is straightforward. You need to supply a name, a service id (typically the package name), and a `ConnectionLifecycleCallback`.

You also need to choose a strategy that both the TV and the phone use. Since it is possible that the user has multiple TVs (living room, bedroom, etc.), the best strategy to use is P2P_CLUSTER.

Then start advertising. The onSuccessListener and onFailureListener tell you whether or not the device was able to start advertising; they do not indicate that a device has been discovered.

fun doStartAdvertising(context: Context, handlePayload: (Payload) -> Unit) {
    // handlePayload would be invoked from the payload callback when data arrives (wiring elided here)
    Nearby.getConnectionsClient(context).startAdvertising(
        context.getString(R.string.tv_name),
        context.packageName,
        connectionLifecycleCallback,
        AdvertisingOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build()
    )
    .addOnSuccessListener {
        Log.d(LoginStepFragment.TAG, "We are advertising!")
    }
    .addOnFailureListener {
        Log.d(LoginStepFragment.TAG, "We cannot start advertising.")
        Toast.makeText(
            context, "We cannot start advertising.", Toast.LENGTH_LONG)
                .show()
    }
}

The real magic happens in the `connectionLifecycleCallback` that is triggered when devices start to initiate a connection. The TV should accept the handshake from the phone (after performing the necessary authentication -- see Security Considerations below for more) and supply a payload listener.

val connectionLifecycleCallback = object : ConnectionLifecycleCallback() {

    override fun onConnectionInitiated(
            endpointId: String, 
            connectionInfo: ConnectionInfo
    ) {
        Log.d(TAG, "Connection initialized to endpoint: $endpointId")
        // Make sure to authenticate using `connectionInfo.authenticationToken` 
        // before accepting
        Nearby.getConnectionsClient(context)
            .acceptConnection(endpointId, payloadCallback)
    }

    override fun onConnectionResult(
        endpointId: String, 
        connectionResolution: ConnectionResolution
    ) {
        Log.d(TAG, "Received result from connection: ${connectionResolution.status.statusCode}")
        doStopAdvertising()
        when (connectionResolution.status.statusCode) {
            ConnectionsStatusCodes.STATUS_OK -> {
                Log.d(TAG, "Connected to endpoint: $endpointId")
                otherDeviceEndpointId = endpointId
            }
            else -> {
                otherDeviceEndpointId = null
            }
        }
    }

    override fun onDisconnected(endpointId: String) {
        Log.d(TAG, "Disconnected from endpoint: $endpointId")
        otherDeviceEndpointId = null
    }
}

The payloadCallback listens for the phone to send the login information needed. After receiving the login information, the connection is no longer needed. We go into more detail later in the Ending the Conversation section.

Discovering the big screen: The phone code

Nearby Connections does not prompt the user for consent itself. However, the location permission must be granted in order for discovery with Nearby Connections to work its magic. (It uses BLE scanning under the covers.)

After opening the app on the phone, start by prompting the user for the location permission if it has not already been granted on devices running Marshmallow and higher.
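
A minimal sketch of that permission check, assuming ACCESS_COARSE_LOCATION is the permission your version of Nearby Connections requires and using an arbitrary request code:

import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private const val RC_LOCATION = 42  // arbitrary request code (placeholder)

// Ask for the location permission before starting discovery on the phone.
fun ensureLocationPermission(activity: Activity) {
    val granted = ContextCompat.checkSelfPermission(
        activity, Manifest.permission.ACCESS_COARSE_LOCATION) == PackageManager.PERMISSION_GRANTED
    if (!granted) {
        ActivityCompat.requestPermissions(
            activity, arrayOf(Manifest.permission.ACCESS_COARSE_LOCATION), RC_LOCATION)
    }
}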

Once the permission is granted, start discovering, confirm the connection, collect the credentials, and send a message to the TV app.

Discovering is as simple as advertising. You need a service id (typically the package name -- this should be the same on the Discoverer and Advertiser for them to see each other), a name, and an `EndpointDiscoveryCallback`. Similar to the TV code, the flow is triggered by callbacks based on the connection status.

Nearby.getConnectionsClient(context).startDiscovery(
        context.packageName,
        mobileEndpointDiscoveryCallback,
        DiscoveryOptions.Builder().setStrategy(Strategy.P2P_CLUSTER).build()
        )
        .addOnSuccessListener {
            // We're discovering!
            Log.d(TAG, "We are discovering!")
        }
         .addOnFailureListener {
            // We were unable to start discovering.
            Log.d(TAG, "We cannot start discovering!")
        }

The Discoverer's listeners are similar to the Advertiser's success and failure listeners; they signal if the request to start discovery was successful or not.

Once you discover an advertiser, the `EndpointDiscoveryCallback` is triggered. You need to keep track of the other endpoint so you know where to send the payload (e.g., the user's credentials) later.

val mobileEndpointDiscoveryCallback = object : EndpointDiscoveryCallback() {
    override fun onEndpointFound(
        endpointId: String, 
        discoveredEndpointInfo: DiscoveredEndpointInfo
    ) {
        // An endpoint was found!
        Log.d(TAG, "An endpoint was found, ${discoveredEndpointInfo.endpointName}")
        Nearby.getConnectionsClient(context)
            .requestConnection(
                context.getString(R.string.phone_name), 
                endpointId, 
                connectionLifecycleCallback)
    }

    override fun onEndpointLost(endpointId: String) {
        // A previously discovered endpoint has gone away.
        Log.d(TAG, "An endpoint was lost, $endpointId")
    }
}

One of the devices must initiate the connection. Since the Discoverer has a callback for endpoint discovery, it makes sense for the phone to request the connection to the TV.

The phone asks for a connection supplying a `connectionLifecycleCallback` which is symmetric to the callback in the TV code.

val connectionLifecycleCallback = object : ConnectionLifecycleCallback() {
    override fun onConnectionInitiated(
        endpointId: String,
        connectionInfo: ConnectionInfo
    ) {
        Log.d(TAG, "Connection initialized to endpoint: $endpointId")
        // Make sure to authenticate using `connectionInfo.authenticationToken` before accepting
        Nearby.getConnectionsClient(context)
                .acceptConnection(endpointId, payloadCallback)
    }

    override fun onConnectionResult(
        endpointId: String,
        connectionResolution: ConnectionResolution
    ) {
        Log.d(TAG, "Connection result from endpoint: $endpointId")
        when (connectionResolution.status.statusCode) {
            ConnectionsStatusCodes.STATUS_OK -> {
                Log.d(TAG, "Connected to endpoint: $endpointId")
                otherDeviceEndpointId = endpointId
                waitingIndicator.visibility = View.GONE
                emailInput.editText?.isEnabled = true
                passwordInput.editText?.isEnabled = true

                Nearby.getConnectionsClient(this@MainActivity).stopDiscovery()
            }
            else -> {
                otherDeviceEndpointId = null
            }
        }
    }

    override fun onDisconnected(endpointId: String) {
        Log.d(TAG, "Disconnected from endpoint: $endpointId")
        otherDeviceEndpointId = null
    }
}

Once the connection is established, stop discovery to avoid keeping this battery-intensive operation running longer than needed. The example stops discovery after the connection is established, but it is possible for a user to leave the activity before that happens. Be sure to stop the discovery/advertising in onStop() on both the TV and phone.


override fun onStop() {
    super.onStop()
    Nearby.getConnectionsClient(this).stopDiscovery()
}


Just like a TV app, when you accept the connection you supply a payload callback. The callback listens for messages from the TV app such as the ACK described above to clean up the connection.

After the devices are connected, the user can use the keyboard and send their authentication information to the TV by calling `sendPayload()`.

fun sendCredentials() {

    val email = emailInput.editText?.text.toString()
    val password = passwordInput.editText?.text.toString()

    val creds = "$email:$password"
    val payload = Payload.fromBytes(creds.toByteArray())
    // Note: avoid logging raw credentials outside of debugging
    Log.d(TAG, "sending payload: $creds")
    // Use let to avoid relying on a smart cast of the mutable otherDeviceEndpointId property
    otherDeviceEndpointId?.let { endpointId ->
        Nearby.getConnectionsClient(this)
                .sendPayload(endpointId, payload)
    }
}

Ending the conversation

After the phone sends the payload to the TV (and the login is successful), there is no reason for the devices to remain connected. The TV can initiate the disconnection with a simple shutdown protocol.

The TV should send an ACK to the phone after it receives the credential payload.

val payloadCallback = object : PayloadCallback() {
    override fun onPayloadReceived(endpointId: String, payload: Payload) {
        if (payload.type == Payload.Type.BYTES) {
            payload.asBytes()?.let {
                val body = String(it)
                Log.d(TAG, "A payload was received: $body")
                // Validate that this payload contains the login credentials, and process them.

                val ack = Payload.fromBytes(ACK_PAYLOAD.toByteArray())
                Nearby.getConnectionsClient(context).sendPayload(endpointId, ack)
            }
        }
    }

    override fun onPayloadTransferUpdate(
        endpointId: String,
        update: PayloadTransferUpdate
    ) {    }
}

The phone should have a `PayloadCallback` that initiates a disconnection in response to the ACK. This is also a good time to reset the UI to show an authenticated state.

private val payloadCallback = object : PayloadCallback() {
    override fun onPayloadReceived(endpointId: String, payload: Payload) {
        if (payload.type == Payload.Type.BYTES) {
            payload.asBytes()?.let {
                val body = String(it)
                Log.d(TAG, "A payload was received: $body")

                if (body == ACK_PAYLOAD) {
                    waitingIndicator.visibility = View.VISIBLE
                    waitingIndicator.text = getString(R.string.login_successful)
                    emailInput.editText?.isEnabled = false
                    passwordInput.editText?.isEnabled = false
                    loginButton.isEnabled = false

                    Nearby.getConnectionsClient(this@MainActivity)
                        .disconnectFromEndpoint(endpointId)
                }
            }
        }
    }

    override fun onPayloadTransferUpdate(
        endpointId: String,
        update: PayloadTransferUpdate
    ) {    }
}

Security considerations

For security (especially since we're sending over sensitive information like login credentials), it's strongly recommended that you authenticate the connection by showing a code and having the user confirm that the two devices being connected are the intended ones -- without this, the connection established by Nearby Connection is encrypted but not authenticated, and that's susceptible to Man-In-The-Middle attacks. The documentation goes into greater detail on how to authenticate a connection.

Does your app offer a second screen experience?

There are many times when a user needs to supply input to a TV app. The Nearby API provides a way to offload the hardships of an onscreen-dpad-driven keyboard to an easy and familiar phone keyboard.

What use cases do you have where a second screen would simplify your user's life? Leave a comment or send me (@benjamintravels) or Varun (@varunkapoor, Team Lead for Nearby Connections) a tweet to continue the discussion.

August 17th 2018, 6:39 pm

Streaming support spec for hearing aids on Android

Android Developers Blog

Posted by Seang Chau, Vice President, Engineering

According to the World Health Organization1, around 466 million people worldwide have disabling hearing loss. This number is expected to increase to 900 million people by the year 2050. Google is working with GN Hearing to create a new open specification for hearing aid streaming support on future versions of Android. Users with hearing loss will be able to connect, pair, and monitor their hearing aids so they can hear their phones loudly and clearly.

Hearing aid users expect a high quality, low latency experience with minimal impact on phone and hearing aid battery life. We've published a new hearing aid spec for Android smartphones: Audio Streaming for Hearing Aids (ASHA) on Bluetooth Low Energy Connection-Oriented Channels. ASHA is designed to have minimal impact on battery life and low latency while maintaining a high quality audio experience for users who rely on hearing aids. We look forward to continually evolving the spec to even better meet the needs of our users.

The spec details the pairing and connectivity, network topology, system architecture, and system requirements for implementing hearing aids using low energy connection-oriented channels. Any hearing aid manufacturer can now build native hearing aid support for Android.

The protocol specification is available here.

August 16th 2018, 9:09 am

Updating Wear OS Google Play Store policy to increase app quality

Android Developers Blog

Posted by Hoi Lam, Lead Developer Advocate, Wear OS by Google

Today we are announcing a new initiative to improve the quality of Wear apps and their presentation in the Google Play Store. The Wear app review process, which has been in place since the launch of Android Wear 2.0, is currently optional. It will become mandatory for apps to be listed on the Wear OS by Google version of the Google Play Store from the following dates:

The review process for mobile apps remains unchanged, and is independent of the Wear app review. Mobile app updates will not be blocked if they fail the Wear app review.

We hope this lightweight app review process will improve the quality of Wear app experiences across the wide range of devices available to your users. In addition, since screenshots are required for the Wear app review, this will improve the discovery and presentation of your Wear apps in the Google Play Store.

See a comprehensive list of review criteria here. The following are common issues we see during Wear app reviews:

Opting out of app review for early prototypes

We understand that some developers need to experiment with their Wear apps in the early stages of app development, and a Wear app review at this stage might not be appropriate. In this case, developers have two options:

Please note that the open test and closed test channels will be subject to Wear app review to help front-load the quality assurance process and to avoid leaving reviews to the last minute.

Thank you for your continuing support of Wear OS by Google.

August 15th 2018, 2:22 pm

Google releases source for Google I/O 2018 for Android

Android Developers Blog

Posted by Shailen Tuli, DPE

Today we're releasing the source code for the official Google I/O 2018 for Android app.

The 2018 version constitutes a comprehensive rewrite of the app. For many years, the app used a ContentProvider + SyncAdapter architecture. This year, we rewrote the app using Architecture Components and brought the code in sync with the Android team's current recommendations for building modern apps.

Architecture

We followed the recommendations laid out in the Guide to App Architecture for writing modular, testable and maintainable code when deciding on the architecture for the app. We kept logic away from Activities and Fragments and moved it to ViewModels. We observed data using LiveData and used the Data Binding Library to bind UI components in layouts to the app's data sources.

The overall architecture of the app can be summarized in this diagram:

We used a Repository layer for handling data operations. IOSched's data comes from a few different sources — user data is stored in Cloud Firestore (either remotely or in a local cache for offline use), user preferences and settings are stored in SharedPreferences, and conference data is fetched remotely and stored in memory for the app to use — and the repository modules are responsible for handling all data operations and abstracting the data sources from the rest of the app. If we ever wanted to swap out the Firestore backend for a different data source in the future, our architecture would allow us to do so in a clean way.

We implemented a lightweight domain layer, which sits between the data layer and the presentation layer, and handles discrete pieces of business logic off the UI thread.
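
The snippet below is a minimal sketch, not the actual IOSched classes, of what such a use case might look like; SessionRepository and LoadSessionTitleUseCase are hypothetical names used only for illustration.

import androidx.lifecycle.MutableLiveData
import java.util.concurrent.Executors

// Minimal sketch (not the actual IOSched classes): a use case runs its work
// off the UI thread and posts the result to LiveData for a ViewModel to observe.
abstract class UseCase<in P, R> {
    private val executor = Executors.newSingleThreadExecutor()

    operator fun invoke(parameters: P, result: MutableLiveData<R>) {
        executor.execute { result.postValue(execute(parameters)) }
    }

    // Business logic lives here, away from Activities and Fragments.
    protected abstract fun execute(parameters: P): R
}

// Hypothetical repository and use case to show how the pieces fit together.
interface SessionRepository {
    fun getSessionTitle(sessionId: String): String
}

class LoadSessionTitleUseCase(private val repository: SessionRepository) :
    UseCase<String, String>() {
    override fun execute(parameters: String): String =
        repository.getSessionTitle(parameters)
}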

We used Dagger2 for dependency injection and we heavily relied on dagger-android to abstract away boilerplate code.

We used Espresso for basic instrumentation tests and JUnit and Mockito for unit testing.

Firebase

The use of Firebase technologies has grown in the app as the Firebase platform has matured. The 2018 version uses the following Firebase components:

Kotlin

We made an early decision to rewrite the app from scratch to bring it in line with modern Android architecture. Using Kotlin for the rewrite was an easy choice: we loved Kotlin's expressive, concise, and powerful syntax; we found that Kotlin's support for safety features including nullability and immutability made our code more resilient; and we leveraged the enhanced functionality provided by Android Ktx extensions.

Material Design

At I/O 2018, the Material Design team announced Material Theming, giving apps much greater ability to customize Material Design to bring more of their product's brand. As we launched the app before Material Theming, we couldn't use all of the new components but we managed to sneak a couple in like the new Bottom App Bar with inset Floating Action Button and we were able to incorporate a lot of the conference's branding elements.

Future plans

The rewrite of the app brings the code in sync with Android's opinionated recommendations about building apps, and it resulted in a cleaner, more maintainable codebase. We'll continue working on the app, incorporating Jetpack components as they become available and finding opportunities to showcase platform features that are good fits for the app. Developers can follow changes to the code on GitHub.

August 13th 2018, 1:37 pm

Looking forward with Google Play

Android Developers Blog

Posted by Purnima Kochikar, Director, Google Play, Apps & Games

On Monday we released Android 9 Pie. As we continue to push the Android platform forward, we're always looking to provide new ways to distribute your apps efficiently, help people discover and engage with your work, and improve the overall security of our ecosystem. Google Play has had a busy year so far with some big milestones around helping you reach more users, including:

Google Play is dedicated to helping you build and grow quality app businesses, reach the more than 2 billion Android devices globally and provide your users with better experiences. Here are some of the important areas we're prioritizing this year:

Innovative Distribution

We've added more testing tools to the popular Play Console to help developers de-risk app launches with internal and external test tracks and staged rollouts to get valuable early feedback. This year we've expanded the Start on Android program globally, which provides developers new to Android additional guidance to optimize their apps before launch. Google Play Instant remains a huge bet to transform app discovery and improve conversions by letting users engage without the friction of installing. We're seeing great results from early adopters and are working on new places to surface instant experiences, including ads, and making them easier to build throughout the year.

Improving App Quality

Google Play plays an important role in helping developers understand and fix quality and performance issues. At I/O, we showcased how we expanded the battery, stability, and rendering reporting in Android vitals to include app start time and permission denials, enabling developers to cut 'application not responding' errors by up to 95%. We also expanded the functionality of automated device testing with the pre-launch report to enable games testing. Recently, we increased the importance of app quality in our search and discovery recommendations, which has resulted in higher engagement and satisfaction with downloaded games.

Richer Discovery

Over the last year we've rolled out more editorial content and improved our machine learning to deliver personalized recommendations for apps and games that engage users. Since most game downloads come from browsing (as opposed to searching or deep linking into) the store, we've put particular focus on games discovery, with a new games home page, special sections for premium and new games, immersive video trailers and screenshots, and the ability to try games instantly. We've also introduced new programs to help drive app downloads through richer discovery. For example, since launching our app pre-registration program in 2016, we've seen nearly 250 million app pre-registrations. Going forward, we'll be expanding on these programs and others like LiveOps cards to help developers engage more deeply with their audience.

Expanding Commerce Platform

Google Play now collects payments in 150 markets via credit card, direct carrier billing (DCB), PayPal, and gift cards. Direct carrier billing is now enabled across 167 carriers in 64 markets. In 2018, we have focused on expanding our footprint in Africa and Latin America with launches in Ghana, Kenya, Tanzania, Nigeria, Peru, and Colombia. And users can now buy Google Play credit via gift cards or other means in more than 800,000 retail locations around the world. This year, we also launched seller support in 18 new markets, bringing the total number of markets with seller support to 98. Our subscription offering continues to improve with ML-powered fraud detection and even more control for subscribers and developers. Google Play's risk modeling automatically helps detect fraudulent transactions, and purchase APIs help you better analyze your refund data to identify suspicious activity.

Maintaining a Safe & Secure Ecosystem

Google Play Protect and our other systems scan and analyze more than 50 billion apps a day to keep our ecosystem safe for users and developers. In fact, people who only download apps from Google Play are nine times less likely to download a potentially harmful app than those who download from other sources. We've made significant improvements in our ability to detect abuse—such as impersonation, inappropriate content, fraud, or malware—through new machine learning models and techniques. The result is that 99% of apps with abusive content are identified and rejected before anyone can install them. We're also continuing to run the Google Play Security Rewards Program through a collaboration with HackerOne to discover other vulnerabilities.

We are continually inspired by what developers build—check out #IMakeApps for incredible examples—and want every developer to have the tools needed to succeed. We can't wait to see what you do next!

August 8th 2018, 10:32 pm

Meet the first Indie Games Accelerator class

Android Developers Blog

Posted by Vineet Tanwar, Business Development Manager, Google Play

In June, we announced the Indie Games Accelerator, a new four-month program to help indie game startups from India, Pakistan, and Southeast Asia supercharge their growth on Android. We have been truly impressed by the overwhelming response we have received, and the creativity that indie game developers from these regions have to offer.

We had a great time going through the applications and playing the games which were submitted for review. Now, it's finally time to announce the inaugural class of startups selected for the program who we will mentor and coach over the next few months. Here they are:

Congratulations to the selected participants and a huge thanks to everyone that applied! Find out more about the program or express your interest in joining the next class of the Indie Games Accelerator.

How useful did you find this blog post?

August 8th 2018, 9:05 pm

Android Pie SDK is now more Kotlin-friendly

Android Developers Blog

Posted by James Lau, Product Manager (@jmslau)

When using the Java programming language, one of the most common pitfalls is trying to access a member of a null reference, causing a NullPointerException to be thrown. Kotlin offers protection against this by baking nullable and non-nullable types into the type system. This helps eliminate NullPointerExceptions from your code and improve your app's overall quality. When Kotlin code is calling into APIs written in the Java programming language, it relies on nullability annotations in those APIs to determine the nullability of each parameter and the return type. Unannotated parameters and return types are treated as platform types, which weakens the null-safety guarantee of Kotlin.

As part of yesterday's Android 9 announcement, we have also released a new Android SDK that contains nullability annotations for some of the most frequently used APIs. This will preserve the null-safety guarantee when your Kotlin code is calling into any annotated APIs in the SDK. Even if you are using the Java programming language, you can still benefit from these annotations by using Android Studio to catch nullability contract violations.

Not a breaking change

Normally, nullability contract violations in Kotlin result in compilation errors. But to ensure the newly annotated APIs are compatible with your existing code, we are using an internal mechanism provided by the Kotlin compiler team to mark the APIs as recently annotated. Recently annotated APIs will result only in warnings instead of errors from the Kotlin compiler. You will need to use Kotlin 1.2.60 or later.

Our plan is to have newly added nullability annotations produce warnings only, and increase the severity level to errors starting in the following year's Android SDK. The goal is to provide you with sufficient time to update your code.

How to use the "Kotlin-friendly" SDK

To get started, go to Tools > SDK Manager in Android Studio. Select Android SDK on the left menu, and make sure the SDK Platforms tab is open.

Use SDK Manager in Android Studio to install SDK for API Level 28 Revision 6

Check Android 8.+ (P) and click OK. This will install the Android SDK Platform 28 revision 6 if it is not already installed. After that, set your project's compile SDK version to API 28 to start using the new Android Pie SDK with nullability annotations.

Use the Project Structure Dialog to change your project's Compile Sdk Version to API 28

You may also need to update your Kotlin plugin in Android Studio if it's not already up-to-date. Make sure your Kotlin plugin version is 1.2.60 or later by going to Tools > Kotlin > Configure Kotlin Plugin Updates.

Once it's set up, your builds will start showing warnings if you have any code that violates nullability contracts in the Android SDK. An example of such a warning is shown below.

Sample warning from the Kotlin compiler when code violates a recently added nullability contract in the Android SDK.

You will also start seeing warnings in Android Studio's code editor if you call an Android API with the incorrect nullability. An example is shown below.

Android Studio warning about passing a null reference to a parameter annotated as a recently non-null type in the android.graphics.Path API.
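
To make the scenario in the caption concrete, here is a small hypothetical sketch; whether this exact overload is among the recently annotated parameters in your SDK revision may vary.

import android.graphics.Path

// Assuming the Path parameter below is one of the recently annotated non-null
// parameters in the API 28 SDK, Kotlin 1.2.60+ reports a warning here
// (it would be an error for a fully annotated API).
fun buildOutline(): Path {
    val outline = Path()
    val extra: Path? = null
    outline.addPath(extra)  // warning: Path? passed where Path is expected
    return outline
}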

Leveraging nullability annotations from the Java programming language

You can benefit from the new nullability annotations even if your code is in the Java programming language. By default, Android Studio will highlight any nullability contract violations with a warning, like the one below:

Android Studio showing a warning about nullability contract violation in code written in the Java programming language

To ensure that you have this inspection enabled, you can go to the IDE's settings page and search for "Constant conditions & exceptions" inspection and make sure that item is checked.

Use the Inspections page under Settings to ensure the Constant conditions & exceptions code inspection is enabled.

If you are using the Java programming language, nullability contract violations will not produce any compiler warning or error. Only the in-IDE code inspections are available to flag these issues.

You can also run code inspections across your entire project and see the aggregated results. Click on Analyze > Inspect Code… to start.

What's Next

The Android SDK API surface is very large, and we have only annotated a small percentage of the APIs so far - there is still a lot of work remaining. Over the next several Android SDK releases, we will continue to add nullability annotations to the existing Android APIs, as well as make sure new APIs are annotated.

With the "Kotlin-friendly" Android SDK, the nullability annotations in AndroidX (part of the Jetpack family), and Android KTX, we are continuing to improve the Android APIs for developers using Kotlin. If you have not yet tried Kotlin, we encourage you to try it. Not only can Kotlin make your code more concise, it can also improve the stability of your apps.

Happy Kotlin-ing!

August 7th 2018, 1:31 pm

Introducing Android 9 Pie

Android Developers Blog

Posted by Dave Burke, VP of Engineering

After more than a year of development and months of testing by early adopters, we're ready to launch Android 9 Pie, the latest release of Android, to the world.

Android 9 harnesses the power of machine learning to make your phone smarter, simpler, and tailored to you. Read all about the new consumer features here. For developers, Android 9 includes many new ways to enhance your apps and build new experiences to drive engagement.

You've given us tons of feedback along the way--over a thousand bugs and feature requests--thank you! More than 140,000 of you tried our preview builds through the Android Beta program, and seven of our device maker partners also brought our Beta to their flagship devices, enabling users around the world to give their feedback too.

Today we're pushing the source code to Android Open Source Project (AOSP), and starting the Android 9 rollout to all Pixel users worldwide, with Android 9 coming to many more devices in the coming months.

We continue to move Android forward as the premier open platform for developers worldwide to build their businesses. With Android 9 -- together with the powerful new capabilities in Google Play for apps and games -- we're committed to helping you build great experiences, as well as reach and engage the right users safely and cost-effectively around the world.

What's in Android 9?

A smarter smartphone, with machine learning at the core

Android 9 helps your phone learn as you use it, by picking up on your preferences and adjusting automatically. From helping users get the most out of their battery life to surfacing the best parts of the apps they use all the time, right when they need them most, Android 9 keeps things running smoother, longer.

Adaptive Battery

We partnered with DeepMind on a feature called Adaptive Battery that uses machine learning to prioritize system resources for the apps the user cares about most. If your app is optimized for Doze, App Standby, and Background Limits, Adaptive Battery should work well for you right out of the box. If you haven't yet optimized your app, make sure to check out the details in the power documentation to see how it works.

Slices

Slices can help users perform tasks faster by enabling engagement outside of the fullscreen app experience. They do this by using UI templates that can display rich, dynamic, and interactive content from your app within the Google Search app, and later in other places like the Google Assistant. You can learn more about building Slices to enhance your app here.

App Actions

App Actions is a new way to raise the visibility of your app and drive engagement. Actions take advantage of machine learning to surface your app to the user at just the right time, based on your app's semantic intents and the user's context.

We'll be sharing more details in the coming weeks on registering your app to handle one or more user intents, so your apps can be enabled for App Actions and surfaced across multiple Google and Android surfaces in response to user queries.

Text Classifier and Smart Linkify

We've extended the ML models that identify entities in content or text input to support more types like Dates and Flight Numbers through the TextClassifier API. Smart Linkify lets you take advantage of the TextClassifier models through the Linkify API, including enriched options for quick follow-on user actions. Smart Linkify also delivers significant improvements in accuracy of detection as well as performance.

Neural Networks API 1.1

Android 9 adds an updated version of the Neural Networks API, to extend Android's support for accelerated on-device machine learning. Neural Networks 1.1 adds support for nine new ops -- Pad, BatchToSpaceND, SpaceToBatchND, Transpose, Strided Slice, Mean, Div, Sub, and Squeeze. A typical way to take advantage of the APIs is through TensorFlow Lite.

Getting the most from your phone -- more easily

We're excited about making your smartphone more intelligent. But it's also important that the technology fades to the back for users. In Android 9, we've evolved Android's UI to be simpler and more approachable -- for developers, these changes help improve the way users find, use, and manage your apps.

New system navigation

Android 9 introduces a new system navigation that we've been working on for more than a year. The new design helps make Android's multitasking more approachable and makes discovering apps much easier. You can swipe up from anywhere to see full-screen previews of recently used apps and simply tap to jump back into one of them.

Display cutout

Now your app can take full advantage of the latest edge-to-edge screens through display cutout support in Android 9. For most apps, supporting display cutout is seamless, with the system managing status bar height to separate your content from the cutout. If you have immersive content, you can use the display cutout APIs to check the position and shape of the cutout and request full-screen layout around it. To help with development and testing, we've added a Developer Option that simulates several cutout shapes on any device.

Apps with immersive content can display content fullscreen on devices with a display cutout.
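
As a rough sketch of the opt-in for immersive content (the layout resource below is a placeholder), an Activity can request that its window lay out into the short edges around any cutout:

import android.app.Activity
import android.os.Bundle
import android.view.WindowManager

// Minimal sketch: let fullscreen content lay out into the short edges around a
// cutout (API 28+). R.layout.activity_immersive is a placeholder layout.
class ImmersiveActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        window.attributes = window.attributes.apply {
            layoutInDisplayCutoutMode =
                WindowManager.LayoutParams.LAYOUT_IN_DISPLAY_CUTOUT_MODE_SHORT_EDGES
        }
        setContentView(R.layout.activity_immersive)
        // Once the view is attached, the cutout geometry is available via
        // window.decorView.rootWindowInsets?.displayCutout
    }
}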

Notifications and smart reply

Android 9 makes notifications even more useful and more actionable. Messaging apps can take advantage of the new MessagingStyle APIs to show conversations, attach photos and stickers, and even suggest smart replies. You'll soon be able to use ML Kit to generate smart reply suggestions for your app.

MessagingStyle notifications with conversations and smart replies [left], images and stickers [right].

Text Magnifier

In Android 9 we've added a Magnifier widget to improve the user experience of selecting text. The Magnifier widget lets users precisely position the cursor or the text selection handles by viewing zoomed text through a draggable pane. You can attach it to any view that is attached to a window, so you can use it in custom widgets or during custom text-rendering. The Magnifier widget can also provide a zoomed-in version of any view or surface, not just text.
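
A minimal sketch of attaching a Magnifier to a view's touch handling (not tied to any particular widget); in a real text widget you would position it over the cursor or selection handles instead of the raw touch point:

import android.view.MotionEvent
import android.view.View
import android.widget.Magnifier

// Minimal sketch: show a Magnifier over any attached view while the user drags (API 28+).
fun attachMagnifier(view: View) {
    val magnifier = Magnifier(view)
    view.setOnTouchListener { _, event ->
        when (event.actionMasked) {
            MotionEvent.ACTION_DOWN, MotionEvent.ACTION_MOVE ->
                magnifier.show(event.x, event.y)
            MotionEvent.ACTION_UP, MotionEvent.ACTION_CANCEL ->
                magnifier.dismiss()
        }
        true  // consume the event for this simple demo
    }
}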

Check out our recent blog post for more about this and other Text features, such as PrecomputedText and line height and baseline text alignment.

Security and privacy for users

Biometric prompt

With a range of biometric sensors in use for authentication, we've made the experience more consistent across sensor types and apps. Android 9 introduces a system-managed dialog to prompt the user for any supported type of biometric authentication. Apps no longer need to build their own dialog--instead they use the BiometricPrompt API to show the standard system dialog. In addition to Fingerprint (including in-display sensors), the API supports Face and Iris authentication.

If your app is drawing its own fingerprint auth dialogs, you should switch to using the BiometricPrompt API as soon as possible. See this post for more information.
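Here's a minimal Kotlin sketch (names and strings are illustrative) of showing the standard system dialog with BiometricPrompt:

import android.content.Context
import android.content.DialogInterface
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal

// Illustrative sketch: show the system biometric dialog instead of a custom fingerprint UI.
fun authenticate(context: Context) {
    val prompt = BiometricPrompt.Builder(context)
        .setTitle("Sign in")
        .setNegativeButton("Cancel", context.mainExecutor,
            DialogInterface.OnClickListener { _, _ -> /* user cancelled */ })
        .build()
    prompt.authenticate(
        CancellationSignal(),
        context.mainExecutor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(result: BiometricPrompt.AuthenticationResult) {
                // Proceed with the authenticated action.
            }
        }
    )
}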

Protected Confirmation

Android 9 introduces Android Protected Confirmation, which uses the Trusted Execution Environment (TEE) to guarantee that a given prompt string is shown and confirmed by the user. Only after successful user confirmation will the TEE then sign the prompt string, which the app can verify.
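As a rough Kotlin sketch (names, prompt text, and extra data are purely illustrative), presenting a protected confirmation might look like this:

import android.content.Context
import android.security.ConfirmationCallback
import android.security.ConfirmationPrompt

// Illustrative sketch: ask the user to confirm a prompt string backed by the TEE.
fun confirmPayment(context: Context) {
    if (!ConfirmationPrompt.isSupported(context)) return // not all devices support this
    val prompt = ConfirmationPrompt.Builder(context)
        .setPromptText("Pay 10 USD to Example Store?")
        .setExtraData(byteArrayOf()) // app-specific data included in the confirmed payload
        .build()
    prompt.presentPrompt(context.mainExecutor, object : ConfirmationCallback() {
        override fun onConfirmed(dataThatWasConfirmed: ByteArray) {
            // Have a TEE-backed key sign dataThatWasConfirmed, then verify it server-side.
        }
    })
}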

Stronger protection for private keys

We've added StrongBox as a new KeyStore type, providing API support for devices that provide key storage in tamper-resistant hardware with isolated CPU, RAM, and secure flash. You can set whether your keys should be protected by a StrongBox security chip in your KeyGenParameterSpec.
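Here's a minimal Kotlin sketch (the helper name and alias are ours) of requesting a StrongBox-backed signing key:

import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import java.security.KeyPairGenerator

// Illustrative sketch: generate an EC signing key whose storage is backed by StrongBox.
fun generateStrongBoxKey(alias: String) {
    val spec = KeyGenParameterSpec.Builder(
        alias, KeyProperties.PURPOSE_SIGN or KeyProperties.PURPOSE_VERIFY
    )
        .setDigests(KeyProperties.DIGEST_SHA256)
        .setIsStrongBoxBacked(true)
        .build()
    val generator = KeyPairGenerator.getInstance(KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore")
    generator.initialize(spec)
    // Key generation fails with StrongBoxUnavailableException on devices without a StrongBox chip.
    generator.generateKeyPair()
}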

DNS over TLS

Android 9 adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it. Users can manage DNS over TLS behavior in a new Private DNS Mode in Network & internet settings. Apps that perform their own DNS queries can use a new API, LinkProperties.isPrivateDnsActive(), to check the DNS mode. More in this post.
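Here's a minimal Kotlin sketch (helper name is ours; assumes the ACCESS_NETWORK_STATE permission) of checking whether Private DNS is active on the current network:

import android.content.Context
import android.net.ConnectivityManager

// Illustrative sketch: check whether DNS over TLS is active before doing custom DNS work.
fun isPrivateDnsActive(context: Context): Boolean {
    val cm = context.getSystemService(ConnectivityManager::class.java)
    val linkProperties = cm.getLinkProperties(cm.activeNetwork) ?: return false
    return linkProperties.isPrivateDnsActive
}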

HTTPS by default

As part of a larger effort to move all network traffic away from cleartext (unencrypted HTTP) to websites secured with TLS (HTTPS), we're changing the defaults for Network Security Configuration to block all cleartext traffic. You'll now need to make connections over TLS, unless you explicitly opt-in to cleartext for specific domains. See the details here.

Compiler-based security mitigations

In Android 9 we've expanded our use of compiler-level mitigations to harden the platform through run-time detection of dangerous behavior. Control Flow Integrity (CFI) techniques help to prevent code-reuse attacks and arbitrary code execution. In Android 9 we've greatly expanded CFI usage within the media framework and other security-critical components, such as NFC and Bluetooth. We've also introduced CFI kernel support into the Android common kernel when building with LLVM.

We've also expanded our use of Integer overflow sanitizers to mitigate memory-corruption and information-disclosure vulnerabilities. We've prioritized sanitizers in libraries with past vulnerabilities or where complex untrusted input is processed, such as libui, libnl, libmediaplayerservice and others. See this post for details.

Privacy for users

Android 9 safeguards privacy in a number of new ways. The system now restricts access to mic, camera, and all SensorManager sensors from apps that are idle. While your app's UID is idle, the mic reports empty audio and sensors stop reporting events. Cameras used by your app are disconnected and will generate an error if the app tries to use them. In most cases, these restrictions should not introduce new issues for existing apps, but we recommend removing these requests from your apps.

Android 9 also gives the user control over access to the platform's build.serial identifier by putting it behind the READ_PHONE_STATE permission. To access the build.serial identifier, you should use the Build.getSerial() method.
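As a small Kotlin sketch (the helper name is ours), reading the serial only when the user has granted the permission might look like this:

import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.os.Build

// Illustrative sketch: read the hardware serial only after READ_PHONE_STATE has been granted.
fun getSerialOrNull(context: Context): String? =
    if (context.checkSelfPermission(Manifest.permission.READ_PHONE_STATE) ==
        PackageManager.PERMISSION_GRANTED
    ) Build.getSerial() else null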

Read more about all of the privacy changes here.

New experiences in camera, audio, and graphics

Multi-camera API and other camera updates

With Android 9 you can now open streams from two or more physical cameras simultaneously on devices that support the multi-camera API. On devices with either dual-front or dual-back cameras, you can create innovative features not possible with just a single camera, such as seamless zoom, bokeh, and stereo vision. The API also lets you call a logical or fused camera stream that automatically switches between two or more cameras.
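Here's a minimal Kotlin sketch (the helper name is ours) of finding a logical multi-camera and the physical cameras behind it:

import android.content.Context
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CameraMetadata

// Illustrative sketch: locate a logical multi-camera and list its physical camera ids.
fun findLogicalMultiCamera(context: Context): Pair<String, Set<String>>? {
    val manager = context.getSystemService(CameraManager::class.java)
    for (id in manager.cameraIdList) {
        val characteristics = manager.getCameraCharacteristics(id)
        val capabilities =
            characteristics.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES) ?: continue
        if (CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA in capabilities) {
            // Individual physical cameras can then be targeted per output stream.
            return id to characteristics.physicalCameraIds
        }
    }
    return null
}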

Other improvements in camera include new Session parameters that help to reduce delays during initial capture, and Surface sharing that lets camera clients handle various use-cases without the need to stop and start camera streaming. We've also added APIs for display-based flash support and access to OIS timestamps for app-level image stabilization and special effects.

HDR VP9 Video and HEIF image compression

Android 9 adds built-in support for HDR VP9 Profile 2, so you can now deliver HDR-enabled movies to your users on HDR-capable devices.

We're excited to add HEIF (heic) image encoding to the platform. HEIF is a popular format for photos that improves compression to save on storage and network data. With platform support on Android 9 devices, it's easy to send and utilize HEIF images from your backend server. Once you've made sure that your app is compatible with this data format for sharing and display, give HEIF a try as an image storage format in your app. You can do a jpeg-to-heic conversion using ImageDecoder or BitmapFactory to obtain a bitmap from jpeg, and you can use HeifWriter in the AndroidX library to write HEIF still images from YUV byte buffer, Surface, or Bitmap.
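Here's a minimal Kotlin sketch (the helper name is ours; assumes the androidx.heifwriter library dependency) of writing a Bitmap out as a HEIF still image:

import android.graphics.Bitmap
import androidx.heifwriter.HeifWriter

// Illustrative sketch: save a Bitmap as a HEIF still image with the AndroidX HeifWriter.
fun saveAsHeif(bitmap: Bitmap, outputPath: String) {
    val writer = HeifWriter.Builder(
        outputPath, bitmap.width, bitmap.height, HeifWriter.INPUT_MODE_BITMAP
    ).build()
    writer.start()
    writer.addBitmap(bitmap)
    writer.stop(5_000) // wait up to 5 seconds for encoding to finish
    writer.close()
}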

Enhanced audio with Dynamics Processing

The Dynamics Processing API lets you use a new audio effect to isolate specific frequencies and lower loud or increase soft sounds to enhance the acoustic quality of your app. For example, you can improve the sound of someone who speaks quietly in a loud, distant or otherwise acoustically challenging environment. The API gives you access to a multi-stage, multi-band dynamics processing effect that includes a pre-equalizer, a multi-band compressor, a post-equalizer and a linked limiter.
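As a very small Kotlin sketch (the helper name is ours), attaching the effect to an audio session with its default configuration looks like this; the individual stages and bands can then be tuned through the effect's Config classes:

import android.media.audiofx.DynamicsProcessing

// Illustrative sketch: attach a DynamicsProcessing effect to an existing audio session.
fun enableDynamicsProcessing(audioSessionId: Int): DynamicsProcessing {
    val effect = DynamicsProcessing(audioSessionId)
    effect.setEnabled(true)
    return effect
}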

ImageDecoder for bitmaps and drawables

An ImageDecoder API gives you an easier way to decode images to bitmaps or drawables. You can create a bitmap or drawable from a byte buffer, file, or URI. The API offers several advantages over BitmapFactory, including support for exact scaling, single-step decoding to hardware memory, support for post-processing in decode, and decoding of animated images. You can read more here.
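For example, here's a minimal Kotlin sketch (the helper name is ours) of decoding a content Uri to a Drawable with exact scaling applied during decode:

import android.content.ContentResolver
import android.graphics.ImageDecoder
import android.graphics.drawable.Drawable
import android.net.Uri

// Illustrative sketch: decode a Uri straight to a Drawable; animated formats become animated drawables.
fun decode(resolver: ContentResolver, uri: Uri): Drawable {
    val source = ImageDecoder.createSource(resolver, uri)
    return ImageDecoder.decodeDrawable(source) { decoder, info, _ ->
        // Post-processing and exact scaling happen here, e.g. downscale to half size.
        decoder.setTargetSize(info.size.width / 2, info.size.height / 2)
    }
}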

Connectivity and location

Wi-Fi RTT for indoor positioning

Android 9 lets you build indoor positioning features into your apps through platform support for the IEEE 802.11mc Wi-Fi protocol -- also known as Wi-Fi Round-Trip-Time (RTT). On Android 9 devices with hardware support, location permission, and location enabled, your apps can use RTT APIs to measure the distance to nearby Wi-Fi Access Points (APs). The device doesn't need to connect to the APs to use RTT, and to maintain privacy, only the phone is able to determine the distance, not the APs.

Knowing the distance to 3 or more APs, you can calculate the device position with an accuracy of 1 to 2 meters. With this accuracy you can support use-cases like in-building navigation; fine-grained location-based services such as disambiguated voice control (e.g. 'Turn on this light'); and location-based information (e.g. 'Are there special offers for this product?').
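Here's a minimal Kotlin sketch of the ranging call itself (names are ours; assumes location permission is granted and location is enabled):

import android.content.Context
import android.net.wifi.ScanResult
import android.net.wifi.rtt.RangingRequest
import android.net.wifi.rtt.RangingResult
import android.net.wifi.rtt.RangingResultCallback
import android.net.wifi.rtt.WifiRttManager

// Illustrative sketch: range against 802.11mc-capable access points from a recent Wi-Fi scan.
fun measureDistances(context: Context, scanResults: List<ScanResult>) {
    val rttManager = context.getSystemService(WifiRttManager::class.java)
    val request = RangingRequest.Builder()
        .addAccessPoints(
            scanResults.filter { it.is80211mcResponder }.take(RangingRequest.getMaxPeers())
        )
        .build()
    rttManager.startRanging(request, context.mainExecutor, object : RangingResultCallback() {
        override fun onRangingResults(results: List<RangingResult>) {
            // Each result carries the measured distance in millimeters plus a standard deviation.
        }
        override fun onRangingFailure(code: Int) { /* handle failure */ }
    })
}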

Data cost sensitivity in JobScheduler

JobScheduler is Android's central service to help you manage scheduled tasks or work across Doze, App Standby, and Background Limits. In Android 9, JobScheduler handles network-related jobs better for the user, coordinating with network status signals provided separately by carriers. Jobs can now declare their estimated data size, signal prefetching, and specify detailed network requirements—carriers can report networks as being congested or unmetered. JobScheduler then manages work according to the network status. For example, when a network is congested, JobScheduler might defer large network requests. When unmetered, it can run prefetch jobs to improve the user experience, such as prefetching headlines.
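As a rough Kotlin sketch (the JobService, job id, and byte counts are purely illustrative), a prefetch job declaring its estimated data size might look like this:

import android.app.job.JobInfo
import android.app.job.JobParameters
import android.app.job.JobService
import android.content.ComponentName
import android.content.Context
import android.net.NetworkRequest

// Hypothetical JobService that would prefetch content; names are illustrative only.
class PrefetchJobService : JobService() {
    override fun onStartJob(params: JobParameters): Boolean = false // real work would start here
    override fun onStopJob(params: JobParameters): Boolean = false
}

// Illustrative sketch: a prefetch job that declares its estimated data size and network needs.
fun buildPrefetchJob(context: Context): JobInfo =
    JobInfo.Builder(42, ComponentName(context, PrefetchJobService::class.java))
        .setRequiredNetwork(NetworkRequest.Builder().build())
        .setEstimatedNetworkBytes(1_000_000, 10_000) // ~1 MB down, ~10 KB up
        .setPrefetch(true) // lets JobScheduler run this opportunistically, e.g. on unmetered networks
        .build()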

Open Mobile API for NFC payments and secure transactions

Android 9 adds an implementation of the GlobalPlatform Open Mobile API (OMAPI). On supported devices, apps can use the OMAPI APIs to access secure elements (SE) to enable smart-card payments and other secure services. A hardware abstraction layer (HAL) provides the underlying API for enumerating the variety of Secure Elements (eSE, UICC, and others) available.
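Here's a minimal Kotlin sketch (illustrative only; the class name is ours) of connecting to the secure element service and enumerating the available readers:

import android.content.Context
import android.se.omapi.SEService

// Illustrative sketch: connect to the OMAPI service and list the available readers (eSE, UICC, ...).
class SecureElementHelper(context: Context) {
    private val seService = SEService(context, context.mainExecutor, SEService.OnConnectedListener {
        onConnected()
    })

    private fun onConnected() {
        for (reader in seService.readers) {
            // A session and channel to an applet could be opened here via reader.openSession().
        }
    }
}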

Performance for apps

ART performance

Android 9 brings performance and efficiency improvements to all apps through the ART runtime. We've expanded ART's use of execution profiles to optimize apps and reduce the in-memory footprint of compiled app code. ART now uses profile information for on-device rewriting of DEX files, with reductions up to 11% across a range of popular apps. We expect these to correlate closely with reductions in system DEX memory usage and faster startup times for your apps.

Optimized for Kotlin

Kotlin is a first-class language on Android, and if you haven't tried it yet, you should! We've made an enduring commitment to Kotlin in Android and continue to expand support including optimizing the performance of Kotlin code. In Android 9, you'll see the first results of this work--we've improved several compiler optimizations, especially those that target loops, to extract better performance. We're also continuing to work in partnership with JetBrains to optimize Kotlin's generated code. You can get all of the latest Kotlin performance improvements just by keeping Android Studio's Kotlin plugin up-to-date.

Today, we are also releasing an update to the Android 9 - API 28 SDK (rev. 6), which contains nullability annotations in some of the most frequently used APIs. We'll provide more details about this in an upcoming post.

Modern Android

With Android 9 we are modernizing the foundations of Android and the apps that run on it, as part of our deep, sustained investments in security, performance, and stability.

As we announced last year, Google Play will require all app updates to target Android Oreo (targetSdkVersion 26 or higher) by November 2018. In line with that, if your app targets a platform earlier than Android 4.2 (API level 17), users installing it will see a warning dialog after that date. Here's a checklist of resources for help and support as you migrate -- we're looking forward to seeing your apps getting the most from modern Android.

Get your apps ready for Android 9!

With Android 9 coming to Pixel users starting today, and to other devices in the months ahead, it's important to test your app for compatibility as soon as possible. Just install your current app from Google Play on a device or emulator running Android 9. As you work through the flows, make sure your app runs and looks great, and that it handles the Android 9 behavior changes properly.

Also watch for uses of non-SDK interfaces in your app. Android 9 restricts access to selected non-SDK interfaces, so you should reduce your reliance on them. See our recent post for details.

After you've made any necessary updates, we recommend publishing to Google Play right away, without changing the app's platform targeting. This lets you ensure a great experience for Android 9 users while you work on enhancing your app with Android 9 APIs and targeting.

Enhance your app with Android 9 features and APIs

When you're ready, dive in and build with the new features and APIs in Android 9.

To get started, just download the official API 28 SDK and the latest tools and emulator images into Android Studio 3.1, or use the latest version of Android Studio 3.2. Then update your project's compileSdkVersion and targetSdkVersion to API 28. When you change your targeting, make sure your app supports all of the applicable behavior changes.

As soon as you're ready, publish your APK updates to Google Play. A common strategy is to use Google Play's beta testing feature to get early feedback from a small group of users and then do a staged rollout to production.

Visit the Android 9 site for details and developer documentation. Also check out this video and the Google I/O Android Playlist for more on what's new in Android 9 for developers.

Coming to a device near you

Starting today, an over-the-air update to Android 9 will begin rolling out to Pixel phones. And devices that participated in the Beta program from Sony Mobile, Xiaomi, HMD Global, Oppo, Vivo, OnePlus, and Essential, as well as all qualifying Android One devices, will receive this update by the end of this fall! We are also working with a number of other partners to launch or upgrade devices to Android 9 this year.

As always, the system images for Pixel devices are available here for manual flash and download. If you're looking for the Android 9 source, you'll find it here in the Android Open Source Project repository under the Android 9 branches.

What's next?

Now that we've reached the official release, we're bringing the Developer Preview to a close. We'll soon be closing the Developer Preview issue tracker to new issues, so if you have feedback, feel free to file a new issue against Android 9 in the AOSP issue tracker.

Thanks again to the many developers and early adopters who participated in the Android 9 Developer Preview and public beta. Your contributions have been critical to making the Android 9 platform a great one for developers and consumers.

August 6th 2018, 1:16 pm

Supporting display cutouts on edge-to-edge screens

Android Developers Blog

Posted By Megan Potoski, Product Manager, Android System UI

Smartphones are quickly moving towards smaller bezels and larger aspect ratios. On these devices, display cutouts are a popular way to achieve an edge-to-edge experience while providing space for important sensors on the front of the device. There are currently 16 cutout devices from 11 OEMs already released, including several Android P beta devices, with more on the way.

These striking displays present a great opportunity for you to showcase your app. They also mean it's more important than ever to make sure your app provides a consistently great experience across devices with one or two display cutouts, as well as devices with 18:9 and larger aspect ratios.

Examples of cutout devices: Essential PH-1 (left) and Huawei P20 (right).

Make your app compatible with display cutouts

With many popular and upcoming devices featuring display cutouts, what can you do to make sure your app is cutout-ready?

The good news is, for the most part your app should work as intended even on a cutout device. By default, in portrait mode with no special flags set, the status bar will be resized to be at least as tall as the cutout and your content will display in the window below. In landscape or fullscreen mode, your app window will be letterboxed so that none of your content is displayed in the cutout area.

However, there are a few areas where your app could have issues on cutout devices.

Here are a few guidelines describing what issues to look out for and how to fix them.

Take advantage of the cutout area

Rendering your app content in the cutout area can be a great way to provide a more immersive, edge-to-edge experience for users, especially for content like videos, photos, maps, and games.

An example of an app that has requested layout in the display cutout.

In Android P we added APIs to let you manage how your app uses the display cutout area, as well as to check for the presence of cutouts and get their positions.

You can use layoutInDisplayCutoutMode, a new window layout mode, to control how your content is displayed relative to the cutout. By default, the app's window is allowed to extend into the cutout area if the cutout is fully contained within a system bar. Otherwise, the window is laid out such that it does not overlap with the cutout. You can also set layoutInDisplayCutoutMode to always or never render into the cutout. Using SHORT_EDGES mode to always render into the cutout is a great option if you want to take advantage of the full display and don't mind if a bit of content gets obscured by the cutout.

If you are rendering into the cutout, you can use getDisplayCutout() to retrieve a DisplayCutout that has the cutout's safe insets and bounding box(es). These let you check whether your content overlaps the cutout and reposition things if needed.

<style name="ActivityTheme">
  <item name="android:windowLayoutInDisplayCutoutMode">
    default/shortEdges/never
  </item>
</style>

Attribute for setting layoutInDisplayCutoutMode from the Activity's theme.
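And if you're rendering into the cutout from code, here's a minimal Kotlin sketch (the helper name is ours) of reading the cutout's safe insets from the window insets and padding critical UI away from it:

import android.view.View
import android.view.WindowInsets

// Illustrative sketch: once insets are dispatched, keep important UI out of the cutout area.
fun avoidCutout(rootView: View) {
    rootView.setOnApplyWindowInsetsListener { view, insets: WindowInsets ->
        insets.displayCutout?.let { cutout ->
            view.setPadding(
                cutout.safeInsetLeft, cutout.safeInsetTop,
                cutout.safeInsetRight, cutout.safeInsetBottom
            )
        }
        insets
    }
}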

For devices running Android 8.1 (API 27), we've also back-ported the layoutInDisplayCutoutMode activity theme attribute so you can control the display of your content in the cutout area. Note that support on devices running Android 8.1 or lower is up to the device manufacturer.

To make it easier to manage your cutout implementation across API levels, we've also added DisplayCutoutCompat in the AndroidX library, which is now available through the SDK manager.

For more about the display cutout APIs, take a look at the documentation.

Test your app with cutout

We strongly recommend testing all screens and experiences of your app to make sure that they work well on cutout devices. We recommend using one of the Android P Beta Devices that features a cutout, such as the Essential PH-1.

If you don't have a device, you can also test using a simulated cutout on any device running Android P or in the Android Emulator. This should help you uncover any issues that your app may run into on devices with cutouts, whether they are running Android 8.1 or Android P.

What to expect on devices with display cutouts

Android P introduces official platform support for display cutouts, with APIs that you can use to show your content inside or outside of the cutout. To ensure consistency and app compatibility, we're working with our device manufacturer partners to mandate a few requirements.

First, devices must ensure that their cutouts do not negatively affect apps. There are two key requirements: in portrait orientation with no special flags set, the status bar must extend to at least the height of the cutout; and in fullscreen or landscape mode, the entire cutout area must be letterboxed.

Second, devices may only have up to one cutout on each short edge of the device. This means that you won't see multiple cutouts on a single edge, more than two cutouts on a device, or a cutout on the left or right long edge of the device.

Within these constraints, devices can place cutouts wherever they want.

Special mode

Some devices running Android 8.1 (API level 27) or earlier may optionally support a "special mode" that lets users extend a letterboxed fullscreen or landscape app into the cutout area. Devices would typically offer this mode through a toggle in the navigation bar, which would then bring up a confirmation dialog before extending the screen.

Devices that offer "special mode" allow users to optionally extend apps into the cutout area if supported by the app.

If your app's targetSdkVersion is 27 or higher, you can set the layoutInDisplayCutoutMode activity theme attribute to opt out of special mode if needed.

Don't forget: larger aspect ratios too!

While you are working on cutout support, it's also a great time to make sure your app works well on devices with 18:9 or larger aspect ratios, especially since these devices are becoming increasingly common and can feature display cutouts.

We highly encourage you to support flexible aspect ratios so that your app can leverage the full display area, no matter what device it's on. You should test your app on different display ratios to make sure it functions properly and looks good.

Here are some guidelines on screen support to keep in mind as you develop; also refer to our earlier post on larger aspect ratios for tips on optimizing. If your app can't adapt to the aspect ratios of long screens, you can declare a max aspect ratio to request letterboxing on those screens.

Thanks for reading, and we hope this helps you deliver a delightful experience to all your users, whatever display they may have!

July 30th 2018, 4:25 pm

AndroidX Development is Now Even Better

Android Developers Blog

Posted by Aurimas Liutikas, software engineer on AndroidX team

AndroidX (previously known as Android Support Library) started out as a small set of libraries intended to provide backwards compatibility for new Android platform APIs and, as such, its development was strictly tied to the platform. As a result, all work was done in internal Google branches and then pushed to the public Android Open Source Project (AOSP) together with the platform push. With this flow, external contributions were limited to a narrow window of time where the internal and AOSP branches were close in content. On top of that, it was difficult to contribute -- in order to do a full AndroidX build and testing, external developers had to check out >40GB of the full Android platform code.

Today, the scope of AndroidX has expanded dramatically and includes libraries such as AppCompat for easier UI development, Room for database management, and WorkManager for background work. Many of these libraries implement higher-level abstractions and are less tied to new revisions of the Android platform, and all libraries are designed with backwards compatibility in mind from the start. Several libraries, such as RecyclerView and Fragment, are purely AndroidX-side implementations with few ties to the platform.

Starting a little over two years ago, we began a process of unbundling -- moving AndroidX out of Android platform builds into its own separate build. We had to do a great deal of work, including migrating our builds from make to Gradle as well as migrating all of our API tracking tools and documentation generation out of the platform build. With that process completed, we reached a point where a developer can now check out a minimal AndroidX project, open it in Android Studio, and build using the public SDK and public Android Gradle Plugin.

The Android developer community has long expressed a desire to contribute more easily to AndroidX; however, this was always a challenge due to the reasons described above. This changes today: AndroidX development is moving to public AOSP. That means that our primary feature development (except for top-secret integrations with the platform 😀) and bug fixes will be done in the open using the r.android.com Gerrit review tool and changes will be visible in the aosp/androidx-master-dev branch.

We are making this change to give better transparency to developers; it gives developers a chance to see features and bug fixes implemented in real-time. We are also excited about receiving bug fix contributions from the community. We have written up a short guide on how to go about contributing a patch.

In addition to regular development, AOSP will be a place for experimentation and prototyping. You will see new libraries show up in this repository; some of them may be removed before they ship, change dramatically during pre-alpha development, or merge into existing libraries. The general rule is that only the libraries on maven.google.com are officially ready for external developer usage.

Finally, we are just getting started. We apologize for any rough edges you might encounter when contributing to AndroidX, and we request your feedback via the public AndroidX tracker if you hit any issues.

July 26th 2018, 12:23 pm

Final preview update, official Android P coming soon!

Android Developers Blog

Posted By Dave Burke, VP of Engineering

Android P is almost here! As we put the finishing touches on the new platform, today we're bringing you Android P Beta 4.

Beta 4 is the last preview milestone before we launch the official Android P platform later this summer. Take this opportunity to test your apps and publish updates, to make sure you offer a great experience for users transitioning to Android P!

What's in this update?

Today's Beta 4 update includes a release candidate build with final system behaviors and the official Android P APIs (API level 28), available since Beta 2. It includes everything you need to wrap up your testing in time for the upcoming official Android P release.

Get your apps ready for Android P

With the consumer launch coming soon, it's important to test your app for compatibility with Android P. Just install your current app from Google Play on an Android P Beta device or emulator. As you work through the flows, make sure your app runs and looks great, and that it handles the Android P behavior changes properly.

Also watch for uses of non-SDK interfaces in your app. Android P restricts access to selected non-SDK interfaces, so you should reduce your reliance on them. See our recent post for details.

After you've made any necessary updates, we recommend publishing to Google Play right away without changing the app's platform targeting. This lets you ensure a great experience for Android P users while you work on enhancing your app with Android P APIs and targeting.

Enhance your app with Android P features and APIs

When you're ready, dive into Android P and learn about the new features and APIs that you can use in your apps, like multi-camera support, display cutout, enhanced notifications, ImageDecoder, TextClassifier, and others.

To build with the new APIs, just download the official API 28 SDK and tools into Android Studio 3.1, or use the latest version of Android Studio 3.2. Then update your project's compileSdkVersion and targetSdkVersion to API 28. When you change your targeting, make sure your app supports all of the applicable behavior changes.

As soon as you're ready, publish your APK updates that are compiled against, or optionally targeting, API 28. A common strategy is to use Google Play's beta testing feature to get early feedback from a small group of users and then do a staged rollout to production.

Visit the Developer Preview site for details and documentation. Also check out this video and the Google I/O Android playlist for more on what's new in Android P for developers.

How do I get Beta 4?

It's easy - you can get Android P Beta 4 on Pixel devices by enrolling here. If you're already enrolled in our Android Beta program, you'll automatically get the Beta 4 update soon. As always, downloadable system images for Pixel devices are also available. Partners who are participating in the Android P Beta program will also be updating their devices to Beta 4 over the coming weeks.

What's next?

Stay tuned for the official Android P launch coming soon! You can continue to share your feedback or requests in the meantime, and feel free to use our hotlists for platform issues, app compatibility issues, and third-party SDK issues.

Thanks for your feedback so far, and thank you to everyone who participated in our recent Reddit AMA on r/androiddev!

July 25th 2018, 1:23 pm