Android Developers Blog
android-developers.googleblog.com
News and insights on the Android platform, developer tools, and events.
Posted by Meghan Mehta, Android Developer Relations Engineer

Today, the Jetpack Compose April ‘26 release is stable. This release contains version 1.11 of core Compose modules (see the full BOM mapping), shared element debug tools, trackpad events, and more. We also have a few experimental APIs that we’d love you to try out and give us feedback on. To use today’s release, upgrade your Compose BOM version to:

implementation(platform("androidx.compose:compose-bom:2026.04.01"))

Changes in Compose 1.11.0

Coroutine execution in tests
Unlike the previous test APIs, which used UnconfinedTestDispatcher to execute coroutines immediately, the v2 APIs use the StandardTestDispatcher. This means that when a coroutine is launched in your tests, it is now queued and does not execute until the virtual clock is advanced. This better mimics production conditions, effectively flushing out race conditions and making your test suite significantly more robust and less flaky. To ensure your tests align with standard coroutine behavior and to avoid future compatibility issues, we strongly recommend migrating your test suite. Check out our comprehensive migration guide for API mappings and common fixes.

Shared element improvements and animation tooling
We’ve also added some handy visual debugging tools for shared elements and Modifier.animateBounds. You can now see exactly what’s happening under the hood—like target bounds, animation trajectories, and how many matches are found—making it much easier to spot why a transition might not be behaving as expected. To use the new tooling, simply surround your SharedTransitionLayout with the LookaheadAnimationVisualDebugging composable.
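To make the behavioral difference concrete, here is a toy model in plain Kotlin. These are not the real kotlinx-coroutines-test types — just a sketch of the queueing behavior StandardTestDispatcher introduces, with hypothetical names:

```kotlin
// Toy illustration of the dispatcher change described above -- NOT the real
// kotlinx-coroutines-test classes. With StandardTestDispatcher-style behavior,
// launched work is queued and only runs once the virtual clock is advanced.
class ToyStandardRunner {
    private val queue = ArrayDeque<() -> Unit>()

    // Like launching a coroutine under StandardTestDispatcher: queued, not run.
    fun launch(task: () -> Unit) {
        queue.addLast(task)
    }

    // Like advancing the virtual clock: drain and execute everything queued.
    fun advanceUntilIdle() {
        while (queue.isNotEmpty()) queue.removeFirst().invoke()
    }
}

// Returns the counter value (before advancing, after advancing) for one task.
fun demoQueueing(): Pair<Int, Int> {
    var counter = 0
    val runner = ToyStandardRunner()
    runner.launch { counter++ }
    val before = counter      // still 0: the task is only queued
    runner.advanceUntilIdle()
    return before to counter  // the task has now run exactly once
}
```

Under the old unconfined behavior, `before` would already be 1; the queued model is what surfaces race conditions your production dispatchers would also hit.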
LookaheadAnimationVisualDebugging(
    overlayColor = Color(0x4AE91E63),
    isEnabled = true,
    multipleMatchesColor = Color.Green,
    isShowKeylabelEnabled = false,
    unmatchedElementColor = Color.Red,
) {
    SharedTransitionLayout {
        CompositionLocalProvider(
            LocalSharedTransitionScope provides this,
        ) {
            // your content
        }
    }
}

Trackpad events
Trackpad events are now reported as PointerType.Mouse events, aligning mouse and trackpad behavior to better match user expectations. Previously, these trackpad events were interpreted as fake touchscreen fingers of PointerType.Touch, which led to confusing user experiences. For example, clicking and dragging with a trackpad would scroll instead of selecting. Because these events now carry the Mouse pointer type in the latest release of Compose, clicking and dragging with a trackpad will no longer scroll. We also added support for more complicated trackpad gestures as recognized by the platform since API 34, including two-finger swipes and pinches. These gestures are automatically recognized by components like Modifier.scrollable and Modifier.transformable, giving them better behavior with trackpads. These changes improve trackpad behavior across built-in components, with redundant touch slop removed, a more intuitive drag-and-drop starting gesture, double-click and triple-click selection in text fields, and desktop-styled context menus in text fields. To test trackpad behavior, there are new testing APIs such as performTrackpadInput, which allow you to validate the behavior of your apps when used with a trackpad. If you have custom gesture detectors, validate behavior across input types, including touchscreens, mice, trackpads, and styluses, and ensure support for mouse scroll wheels and trackpad gestures.

Composition host defaults (Compose runtime)
We’ve added HostDefaultProvider, LocalHostDefaultProvider, HostDefaultKey, and ViewTreeHostDefaultKey to supply host-level services directly through compose-runtime.
This removes the need for libraries to depend on compose-ui for lookups, better supporting Kotlin Multiplatform. To link these values to the composition tree, library authors can use compositionLocalWithHostDefaultOf to create a CompositionLocal that resolves defaults from the host.

Preview wrappers
By implementing the PreviewWrapper interface and applying the new @PreviewWrapperProvider annotation, you can easily inject custom logic, such as applying a specific Theme. The annotation can be applied to a function annotated with @Composable and @Preview or @MultiPreview, offering a generic, easy-to-use solution that works across preview features and significantly reduces repetitive code.

class ThemeWrapper : PreviewWrapper {
    @Composable
    override fun Wrap(content: @Composable (() -> Unit)) {
        JetsnackTheme {
            content()
        }
    }
}

@PreviewWrapperProvider(ThemeWrapper::class)
@Preview
@Composable
private fun ButtonPreview() {
    // JetsnackTheme in effect
    Button(onClick = {}) {
        Text(text = "Demo")
    }
}

Deprecations and removals
As announced in the Compose 1.10 blog post, we’re deprecating Modifier.onFirstVisible(). Its name often led to misconceptions, particularly within lazy layouts, where it would trigger multiple times during scrolling. We recommend migrating to Modifier.onVisibilityChanged(), which allows for more precise manual tracking of visibility states tailored to your specific use case requirements. The ComposeFoundationFlags.isTextFieldDpadNavigationEnabled flag was removed because D-pad navigation for TextFields is now always enabled by default. The new behavior ensures that D-pad events from a gamepad or a TV remote first move the cursor in the given direction. Focus can move to another element only when the cursor reaches the end of the text.

Upcoming APIs
In an upcoming release, Compose will be upgraded to compileSdk 37, with AGP 9 and all apps and libraries that depend on Compose inheriting this requirement.
We recommend keeping up to date with the latest released versions, as Compose aims to promptly adopt new compileSdks to provide access to the latest Android features. Be sure to check out the documentation here for more information on which version of AGP is supported for different API levels. In Compose 1.11.0, the following APIs are introduced as @Experimental, and we look forward to hearing your feedback as you explore them in your apps. Note that @Experimental APIs are provided for early evaluation and feedback and may undergo significant changes or removal in future releases.

Styles (Experimental)
The Style API is a new paradigm for customizing visual elements of components, which has traditionally been performed with modifiers. It is designed to unlock deeper, easier customization by exposing a standard set of styleable properties with simple state-based styling and animated transitions. With this new API, we’re already seeing promising performance benefits. We plan to adopt Styles in Material components once the Style API stabilizes.

@Composable
fun LoginButton(modifier: Modifier = Modifier) {
    Button(
        onClick = {
            // Login logic
        },
        modifier = modifier,
        style = {
            background(
                Brush.linearGradient(
                    listOf(lightPurple, lightBlue)
                )
            )
            width(75.dp)
            height(50.dp)
            textAlign(TextAlign.Center)
            externalPadding(16.dp)
            pressed {
                background(
                    Brush.linearGradient(
                        listOf(Color.Magenta, Color.Red)
                    )
                )
            }
        }
    ) {
        Text(text = "Login")
    }
}

Check out the documentation and file any bugs here.

MediaQuery (Experimental)
The mediaQuery API provides a declarative and performant way to adapt your UI to its environment. It abstracts complex information retrieval into simple conditions within a UiMediaScope, ensuring recomposition only happens when needed. With support for a wide range of environmental signals—from device capabilities like keyboard types and pointer precision, to contextual states like window size and posture—you can build deeply responsive experiences.
Performance is baked in with derivedMediaQuery to handle high-frequency updates, while the ability to override scopes makes testing and previews seamless across hardware configurations. Previously, to get access to certain device properties — like whether a device was in tabletop mode — you’d need to write a lot of boilerplate:

@Composable
fun isTabletopPosture(
    context: Context = LocalContext.current
): Boolean {
    val windowLayoutInfo by WindowInfoTracker
        .getOrCreate(context)
        .windowLayoutInfo(context)
        .collectAsStateWithLifecycle(null)
    return windowLayoutInfo?.displayFeatures.orEmpty().any { displayFeature ->
        displayFeature is FoldingFeature &&
            displayFeature.state == FoldingFeature.State.HALF_OPENED &&
            displayFeature.orientation == FoldingFeature.Orientation.HORIZONTAL
    }
}

@Composable
fun VideoPlayer() {
    if (isTabletopPosture()) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}

With the new API, you can use the mediaQuery syntax to query device properties, such as whether a device is in tabletop mode:

@OptIn(ExperimentalMediaQueryApi::class)
@Composable
fun VideoPlayer() {
    if (mediaQuery { windowPosture == UiMediaScope.Posture.Tabletop }) {
        TabletopLayout()
    } else {
        FlatLayout()
    }
}

Check out the documentation and file any bugs here.

Grid (Experimental)
Grid is a powerful new API for building complex, two-dimensional layouts in Jetpack Compose. While Row and Column are great for linear designs, Grid gives you the structural control needed for screen-level architecture and intricate components without the overhead of a scrollable list. Grid allows you to define your layout using tracks, gaps, and cells, offering familiar sizing options like Dp, percentages, intrinsic content sizes, and flexible "Fr" units.
@OptIn(ExperimentalGridApi::class)
@Composable
fun GridExample() {
    Grid(
        config = {
            repeat(4) { column(0.25f) }
            repeat(2) { row(0.5f) }
            gap(16.dp)
        }
    ) {
        Card1(modifier = Modifier.gridItem(rowSpan = 2))
        Card2(modifier = Modifier.gridItem(columnSpan = 3))
        Card3(modifier = Modifier.gridItem(columnSpan = 2))
        Card4()
    }
}

Check out the documentation and file any bugs here.

FlexBox (Experimental)
FlexBox is a layout container designed for high-performance, adaptive UIs. It manages item sizing and space distribution based on available container dimensions. It handles complex tasks like wrapping (wrap) and multi-axis alignment of items (justifyContent, alignItems, alignContent). It allows items to grow (grow) or shrink (shrink) to fill the container.

@OptIn(ExperimentalFlexBoxApi::class)
@Composable
fun FlexBoxWrapping() {
    FlexBox(
        config = {
            wrap(FlexWrap.Wrap)
            gap(8.dp)
        }
    ) {
        RedRoundedBox()
        BlueRoundedBox()
        GreenRoundedBox(modifier = Modifier.width(350.dp).flex { grow(1.0f) })
        OrangeRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.7f) })
        PinkRoundedBox(modifier = Modifier.width(200.dp).flex { grow(0.3f) })
    }
}

Check out the documentation and file any bugs here.

New SlotTable implementation (Experimental)
This release includes a new implementation of the SlotTable, which is disabled by default. The SlotTable is the internal data structure that the Compose runtime uses to track the state of your composition hierarchy, track invalidations/recompositions, store remembered values, and track all metadata of the composition at runtime. This new implementation is designed to improve performance, primarily around random edits. To try the new SlotTable, enable ComposeRuntimeFlags.isLinkBufferComposerEnabled.

Start coding today!
If your app isn’t using Compose yet, it’s a great time to migrate to Jetpack Compose. As always, we value your feedback and feature requests (especially on @Experimental features that are still baking) — please file them here. Happy composing!
Posted by Niharika Arora, Senior Developer Relations Engineer and Jean-Pierre Pralle, Product Manager, Credential Manager

In the modern digital landscape, the first encounter a user has with an app is often the most critical. Yet, for decades, this initial interaction has been hindered by the friction of traditional verification methods. Today, we're excited to announce a new verified email credential issued by Google, which developers can now retrieve directly from Android’s Credential Manager Digital Credential API.

The Problem: Authentication Friction in the Modern Era
The "current era" of authentication is defined by a trade-off between security and convenience. To ensure that a user owns the email address they provide, you typically rely on One-Time Passwords (OTPs) or "magic links" sent by email or SMS. While effective, these traditional steps introduce significant hurdles:
Context switching: Users must leave the app, open their inbox or messaging app, find the code, and return, a process where many potential users simply drop off.
Delivery issues: While emails are free, they can be delayed or diverted to spam folders.
Onboarding friction: Every extra second spent in the "verification loop" is a second where a user might lose interest, directly impacting conversion rates.

The Solution: Seamless, Verified Email
Google now issues a cryptographically verified email credential directly to Android devices. This verified email credential is delivered through the Credential Manager API, which is Android's implementation of the W3C's Digital Credential API standard. For users, this completely removes the need to manually verify their email through external channels. For developers, the API securely delivers these verified user claims for any scenario, whether you are building an account creation flow, a recovery process, or a high-risk step-up authentication.
While this specific verified email address is sourced securely from the user's Google Account on their device, the underlying Digital Credentials API is issuer-agnostic. This fosters an open ecosystem, allowing any holder of a digital credential with an email claim to offer that verification to your app.

User Experience
The beauty of this API lies in its simplicity for the end user. Instead of hunting for OTP codes, the experience is integrated directly into the Android OS:
Initiation: The process begins when a user focuses on an email input field or taps a "Sign up" or "Recover account" button. You can also initiate the process on page load.
Transparency: A native Android bottom sheet appears, clearly detailing exactly what data is being requested (for example, the user’s verified email address).
One-tap consent: The user simply taps "Agree and continue" to share the data.
Immediate progress: Once consent is given, the app receives the data instantly.
For sign-up or account recovery flows, you can then seamlessly transition the user into passkey creation, ensuring:
Users do not have to enter any user information manually, as compared to traditional username/password registration.
Their next login is even faster and more secure.

Use case 1. Sign up
Accelerate onboarding by fetching a verified email the moment the user taps "Sign up". We strongly recommend you pair the verified email retrieval with passkey creation, also part of the Credential Manager API. Note: You can also fetch other unverified fields such as a user’s given name, family name, name, profile picture, and the hosted domain connected with the verified email.

Use case 2. Account recovery
Eliminate the frustration of users hunting for recovery codes in their spam folders by allowing them to recover their account using the verified email securely stored on their device.

Use case 3.
Re-authentication for sensitive actions
Protect sensitive user actions, such as changing settings or updating profile details, by requiring a quick re-authentication step. Instead of an OTP, you can provide a low-friction verification using the device's verified email.

Important Considerations
As you design your authentication architecture around the Digital Credentials API, keep the following details in mind:
Account support: For the specific email credential issued by Google, only regular consumer Google Accounts are supported (Workspace and supervised accounts are currently not supported). Keep in mind that the Credential Manager API itself is issuer-agnostic, meaning other identity providers can issue credentials with their own account support policies.
Other user data: Beyond email, you can request the user's given name, family name, full name, and profile picture. However, note that only the email is verified by Google.
Auto-verify your @gmail accounts: The API provides verified emails for all consumer Google Accounts. We recommend auto-verifying @gmail.com users and routing custom domains to your existing verification flow - for example, an OTP flow. This ensures you maintain long-term access for external domains not directly managed by Google.
Complementary to Sign in with Google: While both the new verified email credential and the Sign in with Google API provide a verified email, the choice depends on the intended user experience: Use Sign in with Google when your users want to create a federated login session. Use Verified Email when your users want to sign in traditionally with a username/password or passkey but want to auto-verify the email address without the manual chore of an OTP.

Conclusion and Next steps
By integrating the new verified email via the Credential Manager API, you can drastically reduce onboarding friction and provide users with a more streamlined, secure authentication journey.
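The domain-based routing recommended above can be sketched in a few lines of plain Kotlin. The helper name `needsFallbackVerification` is hypothetical, not part of the Credential Manager API:

```kotlin
// Hypothetical helper (not a Credential Manager API): decide whether an email
// from the verified credential can be auto-verified (@gmail.com, managed by
// Google) or should go through your existing flow (e.g. OTP) so you keep
// long-term verification for domains Google does not manage.
fun needsFallbackVerification(email: String): Boolean {
    val domain = email.substringAfterLast('@', missingDelimiterValue = "")
    return !domain.equals("gmail.com", ignoreCase = true)
}
```

You would call this after receiving the credential, before deciding whether to skip your OTP step.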
This represents a shift toward a future where "verification" is no longer a manual chore for the user, but a seamless, integrated part of the native mobile experience. Ready to see how this fits into your own app? To get started, update your project to the latest Credential Manager API and explore our Integration Guide. We encourage you to explore how this streamlined verification can simplify your critical user journeys, from optimizing account creation to enhancing re-authentication flows.
Posted by Matt Dyor, Senior Product Manager

Android Studio Panda 4 is now stable and ready for you to use in production. This release brings Planning Mode, Next Edit Prediction, and more, making it easier than ever to build high-quality Android apps. Here’s a deep dive into what’s new:

Planning Mode
Before the Agent starts working on complex tasks for you, it would be helpful if it could come up with a detailed plan. Jumping straight into a large coding project without a design often leads to technical debt or logic errors; the same is true for AI. That’s why we’re adding Planning Mode. In this mode, the Agent comes up with a detailed project plan before executing tasks. Instead of a single pass where the model directly predicts the next token of code, Planning Mode facilitates a multi-stage reasoning process — giving the agent additional space to evaluate its own proposed logic for potential issues before presenting it to you. This is especially useful for complex and long-running tasks which demand a high degree of architectural precision. To use Planning Mode, switch your conversation mode to "Planning" in the agent input box and enter your prompt.
Switch to Planning Mode
In Planning Mode, the agent examines your request and may generate an implementation plan for large or complex tasks. You have the opportunity to fix mistakes or clarify which approaches to use—all before the agent has spent any time or tokens heading in the wrong direction.
Open Implementation Plan
Add Comments to Implementation Plan
After adding comments, simply hit “Submit Comments” and the agent will use your feedback to revise the implementation plan. To stay on track during execution—which is particularly important with larger changes—the agent organizes its work and generates a "Task List" artifact. You can sit back and watch as the agent methodically completes all of the tasks.
Task List Artifact
After the work is done, the agent produces a “Walkthrough” artifact, giving you a clear summary of exactly what has been changed—making it easy to review the agent's changes. Build with more confidence and control using Planning Mode in the latest release of Android Studio.

Next Edit Prediction
Classic autocomplete is great for finishing your sentences, but coding is rarely a linear path. Often, a change in one place requires a secondary change elsewhere—like adding a new parameter to a function and then needing to update its invocations, or a UI preview update when a Composable is changed. Traditionally, this meant breaking your focus to hunt down the related lines of code that need attention. Next Edit Prediction (NEP) evolves code completion by anticipating your next move, even when it’s not at your current cursor position. By analyzing your recent edits, Android Studio recognizes the logical pattern of your workflow. If you modify a data class or update a constructor, NEP can suggest the next relevant edit—perhaps in a distant function—allowing you to jump straight to the fix. Instead of manually navigating back and forth, you can accept these multi-location suggestions with a single keystroke. This keeps you deep in the "flow state," reducing the cognitive load of routine updates and letting you focus on the complex logic that truly matters to your application. Experience a more intuitive, non-linear way to code in the latest version of Android Studio.
NEP Updating Function Name
NEP Adding New Line

Gemini API Starter Template
Adding powerful AI features to your app just got easier: introducing the Gemini API Starter Template for Android Studio! Integrating generative AI into your Android application used to mean managing complex backend plumbing and worrying about API key security.
With the new Gemini API Starter template in Android Studio, developers can now jump straight into building features rather than spending time configuring infrastructure. Key benefits include:
Zero API key management: Stop worrying about provisioning or rotating keys. By leveraging Firebase AI Logic, the template eliminates the need to embed sensitive credentials in your client-side code.
Automated Firebase integration: The backend plumbing is handled for you. The template automatically connects your project to Firebase services, ensuring a secure bridge between your app and Google’s Gemini models.
Built to scale: This isn't just for prototypes. The production-ready architecture allows you to scale from a local test to a global user base without redesigning your foundation.
Multimodal processing: Supports text, image, video, and audio inputs. You can build features like real-time image analysis, video summarization, and audio transcription.

Get Started
Open Android Studio. Navigate to File > New > New Project. Select the Gemini API Starter template from the gallery.
Gemini API Starter new project template

Agent Web Search
When you’re deep in development, the right answer is often just a search away—but leaving your IDE to find it can snap you out of your flow. Whether you need the exact version number for a dependency or the latest API changes for a third-party library, the Agent Web Search tool is here to help without you ever having to leave Android Studio. While Android Studio’s agent already leverages the Android Knowledge Base for official documentation, modern Android development relies on a vast ecosystem of external libraries. Agent Web Search expands Gemini's reach, allowing it to query Google directly to fetch current reference material from across the web. From checking the latest setup guides for Coil to finding advanced configuration tips for Koin or Moshi, the agent can now pull in the most up-to-date information in real time.
The Agent Web Search tool is designed to be helpful but unobtrusive; it will automatically trigger a web search when it identifies a gap in its local knowledge. You can also take the wheel by asking it to find something specific—simply include "search the web for..." in your prompt. By integrating live web results directly into your workspace, Agent Web Search ensures you’re always building with the most current data available, speeding up your workflow and keeping your project on the cutting edge. Agent Web Search Tool Invocation Android Studio Panda Releases Go from prompt to working prototype with Android Studio Panda 2 and Increase Guidance and Control over Agent Mode with Android Studio Panda 3. Android Studio Panda 2 AI-powered New Project flow: Allows you to build a working app prototype with a single prompt. The agent manages initial setup, navigation configuration, and proper dependencies, and features an autonomous generation loop to handle build errors and deploy to an emulator. Version Upgrade Assistant: Automates dependency management and updates, iteratively attempting builds and resolving conflicts until a stable configuration is found. Android Studio Panda 3 Agent skills: Specialized, user-defined instructions (stored in a .skills directory) that teach the AI agent project-specific capabilities, coding standards, or library usage. Agent permissions: Provides fine-grained control over what agents can do, with features like "Always Allow" rules for trusted operations. For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent. Empty Car App Library App template: Simplifies building driving-optimized apps for Android Auto and Android Automotive OS by handling required boilerplate code. Get started Dive in and accelerate your development. Download Android Studio Panda 4 and start exploring these powerful new agentic features today. As always, your feedback is crucial to us. 
Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!
Posted by Thomas Ezan, Senior Developer Relations Engineer

If you are an Android developer looking to implement innovative AI features into your app, we recently launched powerful new updates: hybrid inference, a new API for Firebase AI Logic to leverage both on-device and cloud inference, and support for new Gemini models including the latest Nano Banana models for image generation. Let’s jump in!

Experiment with hybrid inference
With the new Firebase API for hybrid inference, we implemented a simple rule-based routing approach as an initial solution to let you use both on-device and cloud inference via a unified API. We are planning on providing more sophisticated routing capabilities in the future. It allows your app to dynamically switch between Gemini Nano running locally on the device and cloud-hosted Gemini models. The on-device execution uses ML Kit's Prompt API. The cloud inference supports all the Gemini models from Firebase AI Logic in both Vertex AI and the Developer API. To use it, add the firebase-ai-ondevice dependencies to your app along with Firebase AI Logic:

dependencies {
    [...]
    implementation("com.google.firebase:firebase-ai:17.11.0")
    implementation("com.google.firebase:firebase-ai-ondevice:16.0.0-beta01")
}

During initialization, you create a GenerativeModel instance and configure it with specific inference modes, such as PREFER_ON_DEVICE (falls back to cloud if Gemini Nano is not available on the device) or PREFER_IN_CLOUD (falls back to on-device inference if offline):

val model = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-3.1-flash-lite",
        onDeviceConfig = OnDeviceConfig(
            mode = InferenceMode.PREFER_ON_DEVICE
        )
    )

val response = model.generateContent(prompt)

The Firebase API for hybrid inference for Android is still experimental, and we encourage you to try it in your app, especially if you are already using Firebase AI Logic.
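As a mental model, the rule-based fallback behavior described above might look like the following plain-Kotlin sketch. The enum and function here are illustrative, not the firebase-ai-ondevice implementation:

```kotlin
// Illustrative sketch of rule-based hybrid routing -- NOT the actual
// firebase-ai-ondevice implementation. selectBackend and Backend are
// hypothetical names; the fallback rules come from the modes described above.
enum class InferenceMode { PREFER_ON_DEVICE, PREFER_IN_CLOUD }
enum class Backend { ON_DEVICE, CLOUD }

fun selectBackend(mode: InferenceMode, nanoAvailable: Boolean, online: Boolean): Backend? =
    when (mode) {
        // Run locally when Gemini Nano is available; otherwise fall back to cloud.
        InferenceMode.PREFER_ON_DEVICE ->
            if (nanoAvailable) Backend.ON_DEVICE
            else if (online) Backend.CLOUD
            else null // no usable backend
        // Run in the cloud when online; otherwise fall back to on-device.
        InferenceMode.PREFER_IN_CLOUD ->
            if (online) Backend.CLOUD
            else if (nanoAvailable) Backend.ON_DEVICE
            else null
    }
```

The unified API hides this decision from your app: you configure a mode once and call generateContent as usual.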
Currently, on-device models are specialized for single-turn text generation based on text or single Bitmap image inputs. Review the limitations for more details. We just published a new sample in the AI Sample Catalog leveraging the Firebase API for hybrid inference; it demonstrates how the API can be used to generate a review based on a few selected topics and then translate it into various languages. Check out the code to see it in action!
The new hybrid inference sample in action

Try our new models
As part of the new Gemini models, we've released two models particularly helpful to Android developers and easy to integrate in your application via the Firebase AI Logic SDK.

Nano Banana
Last year we released Nano Banana, a state-of-the-art image generation model. And a few weeks ago, we released a couple of new Nano Banana models. Nano Banana Pro (Gemini 3 Pro Image) is designed for professional asset production and can render high-fidelity text, even in a specific font or simulating different types of handwriting. Nano Banana 2 (Gemini 3.1 Flash Image) is the high-efficiency counterpart to Nano Banana Pro. It's optimized for speed and high-volume use cases. It can be used for a broad range of use cases (infographics, virtual stickers, contextual illustrations, etc.). The new Nano Banana models leverage real-world knowledge and deep reasoning capabilities to generate precise and detailed images. We updated our Magic Selfie sample (use image generation to change the background of your selfie!) to use Nano Banana 2. The background segmentation is now handled directly by the image generation model, which makes the implementation easier and lets Nano Banana 2's improved image generation capabilities shine. See it in action here.
The updated Magic Selfie sample uses Nano Banana 2 to update a selfie background
You can use it via the Firebase AI Logic SDK. Read more about it in the Android documentation.
Gemini 3.1 Flash-Lite
We also released Gemini 3.1 Flash-Lite, a new version of the Gemini Flash-Lite family. The Gemini Flash-Lite models have been particularly favored by Android developers for their good quality/latency ratio and low inference cost. They have been used for various use cases, such as in-app message translation or generating a recipe from a picture of a dish. Gemini 3.1 Flash-Lite, currently in preview, will enable more advanced use cases with latency comparable to Gemini 2.5 Flash-Lite. To learn more about this model, review the Firebase documentation.

Conclusion
It’s a great time to explore the new Hybrid sample in our catalog to see these capabilities in action and understand the benefits of routing between on-device and cloud inference. We also encourage you to check out our documentation to test the new Gemini models.
Posted by Dan Galpin, Developer Relations Engineer

Android 17 has reached Beta 4, the last scheduled beta of this release cycle and a critical milestone for app compatibility and platform stability. Whether you're fine-tuning your app's user experience, ensuring smooth edge-to-edge rendering, or leveraging the newest APIs, Beta 4 provides the near-final environment you need to be testing with.

Get your apps, libraries, tools, and game engines ready!
If you develop an Android SDK, library, tool, or game engine, it's critical to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17. To test, install your production app, or a test app that uses your library or engine, onto a device or emulator running Android 17 Beta 4 (via Google Play or other means). Work through all your app's flows and look for functional or UI issues. Each release of Android contains platform changes that improve privacy, security, and overall user experience; review the app-impacting behavior changes for apps running on and targeting Android 17 to focus your testing, including the following:
Resizability on large screens: Once you target Android 17, you can no longer opt out of maintaining orientation, resizability, and aspect ratio constraints on large screens.
Dynamic code loading: If your app targets Android 17 or higher, the Safer Dynamic Code Loading (DCL) protection introduced in Android 14 for DEX and JAR files now extends to native libraries. All native files loaded using System.load() must be marked as read-only. Otherwise, the system throws UnsatisfiedLinkError.
CT enabled by default: Certificate transparency (CT) is enabled by default. (On Android 16, CT is available but apps had to opt in.)
Local network protections: Apps targeting Android 17 or higher have local network access blocked by default. Switch to using privacy-preserving pickers if possible, and use the new ACCESS_LOCAL_NETWORK permission for broad, persistent access. Background audio hardening: Starting in Android 17, the audio framework enforces restrictions on background audio interactions, including audio playback, audio focus requests, and volume change APIs. Based on your feedback, we’ve made some changes since beta 2, including gating while-in-use FGS enforcement on targetSDK and exempting alarm audio. Full details are available in the updated guidance. App memory limits Android is introducing app memory limits based on the device's total RAM to create a more stable and deterministic environment for your applications and Android users. In Android 17, limits are set conservatively to establish system baselines, targeting extreme memory leaks and other outliers before they trigger system-wide instability resulting in UI stuttering, higher battery drain, and apps being killed. While we anticipate minimal impact on the vast majority of app sessions, we recommend following memory best practices, including establishing a baseline for memory. In the current implementation, getDescription in ApplicationExitInfo will contain the string "MemoryLimiter" if your app was impacted. You can also use trigger-based profiling with TRIGGER_TYPE_ANOMALY to get heap dumps that are collected when the memory limit is hit. The LeakCanary task in the Android Studio Profiler To help you find memory leaks, Android Studio Panda adds LeakCanary integration directly in the Android Studio Profiler as a dedicated task, contextualized within the IDE and fully integrated with your source code. A lighter memory footprint translates directly to smoother performance, longer battery life, and a premium experience across all form factors. Let’s build a faster, more reliable future for the Android ecosystem together! 
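The ApplicationExitInfo check described above can be sketched as follows. This is a minimal, hedged sketch: the "MemoryLimiter" marker comes from the current implementation as noted, while the helper name is ours. In an app, the description string would come from ActivityManager.getHistoricalProcessExitReasons(), queried at next startup.

```kotlin
// Sketch: decide from an ApplicationExitInfo description whether the
// memory limiter ended the previous process. In an app you would feed
// this ActivityManager.getHistoricalProcessExitReasons(...)[i].description.
fun isMemoryLimiterExit(description: String?): Boolean =
    description?.contains("MemoryLimiter") == true

fun main() {
    println(isMemoryLimiterExit("Killed: MemoryLimiter (heap over budget)")) // true
    println(isMemoryLimiterExit("user requested stop"))                      // false
}
```

A hit on this check is a good signal to schedule a heap-dump upload or enable trigger-based profiling on the next run.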
Profiling triggers for app anomalies Android introduces an on-device anomaly detection service that monitors for resource-intensive behaviors and potential compatibility regressions. Integrated with ProfilingManager, this service allows your app to receive profiling artifacts triggered by specific system-detected events. Use the TRIGGER_TYPE_ANOMALY trigger to detect system performance issues such as excessive binder calls and excessive memory usage. When an app breaches OS-defined memory limits, the anomaly trigger allows developers to receive app-specific heap dumps to help identify and fix memory issues. Additionally, for excessive binder spam, the anomaly trigger provides a stack sampling profile on binder transactions. This API callback occurs prior to any system-imposed enforcements. For example, it can help developers collect debug data before the app is terminated by the system due to exceeding memory limits. To understand how to use the trigger, check out our documentation on trigger-based profiling.

val profilingManager =
    applicationContext.getSystemService(ProfilingManager::class.java)
val triggers = ArrayList<ProfilingTrigger>()
triggers.add(
    ProfilingTrigger.Builder(ProfilingTrigger.TRIGGER_TYPE_ANOMALY).build()
)
val mainExecutor: Executor = Executors.newSingleThreadExecutor()
val resultCallback = Consumer<ProfilingResult> { profilingResult ->
    if (profilingResult.errorCode == ProfilingResult.ERROR_NONE) {
        // Upload the profile result to a server for further analysis.
        setupProfileUploadWorker(profilingResult.resultFilePath)
    }
}
profilingManager.registerForAllProfilingResults(mainExecutor, resultCallback)
profilingManager.addProfilingTriggers(triggers)

Post-Quantum Cryptography (PQC) in Android Keystore Android Keystore added support for the NIST-standardized ML-DSA (Module-Lattice-Based Digital Signature Algorithm). On supported devices, you can generate ML-DSA keys and use them to produce quantum-safe signatures, entirely in the device’s secure hardware. 
Android Keystore exposes the ML-DSA-65 and ML-DSA-87 algorithm variants through the standard Java Cryptographic Architecture APIs: KeyPairGenerator, KeyFactory, and Signature. For further details, see our developer documentation.

KeyPairGenerator generator = KeyPairGenerator.getInstance(
    "ML-DSA-65", "AndroidKeyStore");
generator.initialize(
    new KeyGenParameterSpec.Builder(
        "my-key-alias",
        KeyProperties.PURPOSE_SIGN | KeyProperties.PURPOSE_VERIFY)
        .build());
KeyPair keyPair = generator.generateKeyPair();

Get started with Android 17 You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air. If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 4. Continue to report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release. For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you’re set up, here are some of the things you should do: Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page. Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it. We’ll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and betas. For complete information, visit the Android 17 developer site. Join the conversation Your feedback remains our most valuable asset. 
Whether you’re an early adopter on the Canary channel or an app developer testing on Beta 4, consider joining our communities and filing feedback. We’re listening.
Posted by Adarsh Fernando, Group Product Manager and Esteban de la Canal, Senior Staff Software Engineer As Android developers, you have many choices when it comes to the agents, tools, and LLMs you use for app development. Whether you are using Gemini in Android Studio, Gemini CLI, Antigravity, or third-party agents like Claude Code or Codex, our mission is to ensure that high-quality Android development is possible everywhere. Today, we are introducing a new suite of Android tools and resources for agentic workflows — Android CLI with Android skills and the Android Knowledge Base. This collection of tools is designed to eliminate the guesswork of core Android development workflows when you direct an agent’s work outside of Android Studio, making your agents more efficient, effective, and capable of following the latest recommended patterns and best practices. Whether you are just starting your development journey on Android, are a seasoned Android developer, or are managing apps across mobile and web platforms, building your apps with the latest guidance, tools, and AI-assistance is easier than ever. No matter which environment you begin in, these resources let you transition your development experience to Android Studio at any time—where the state-of-the-art tools and agents for Android development are available to help your app experience truly shine. (Re)Introducing the Android CLI Your agents perform best when they have a lightweight, programmatic interface to interact with the Android SDK and development environment. So, at the heart of this new workflow is a revitalized Android CLI. The new Android CLI serves as the primary interface for Android development from the terminal, featuring commands for environment setup, project creation, and device management—with more modern capabilities and easy updatability in mind. The create command makes an Android app project in seconds. 
In our internal experiments, Android CLI improved project and environment setup by reducing LLM token usage by more than 70%, and tasks were completed 3X faster than when agents attempted to navigate these tasks using only the standard toolsets. Key capabilities available to you include: SDK management: Use android sdk install to download only the specific components needed, ensuring a lean development environment. Snappy project creation: The android create command generates new projects from official templates, ensuring the recommended architecture and best practices are applied from the very first line of code. Rapid device creation and deployment: Create and manage virtual devices with android emulator and deploy apps using android run, eliminating the guesswork involved in manual build and deploy cycles. Updatability: Run android update to ensure that you have the latest capabilities available. Android CLI can create a device, run your app on it, and make it easier for agents to navigate UI. While Android CLI will empower your agentic development flows, it’s also been designed to streamline CI, maintenance, and any other scripted automation for the increasingly distributed nature of Android development. Download and try out the Android CLI today! Grounding LLMs with official Android Skills Traditional documentation can be descriptive, conceptual, and high-level. While perfect for learning, LLMs often require precise, actionable instructions to execute complex workflows without using outdated patterns and libraries. To bridge this gap, we are launching the Android skills GitHub repository. Skills are modular, markdown-based (SKILL.md) instruction sets that provide a technical specification for a task and are designed to trigger automatically when your prompt matches the skill's metadata, saving you the hassle of manually attaching documentation to every prompt. 
Android skills cover some of the most common workflows that Android developers and LLMs may struggle with—they help models better understand and execute specific patterns that follow our best practices and guidance on Android development. In our initial release, the repository includes skills like: Navigation 3 setup and migration. Implementing edge-to-edge support. AGP 9 and XML-to-Compose migrations. R8 config analysis, and more! If you’re using Android CLI, you can browse and set up your agent workflow with our growing collection of skills using the android skills command. These skills can also live alongside any other skills you create, or third-party skills created by the Android developer community. Learn more about getting started with Android skills. Install Android skills via the Android CLI to make your agent more effective and efficient. The latest guidance via the Android Knowledge Base The third component we are launching today is the Android Knowledge Base. Accessible through the android docs command and already available in the latest version of Android Studio, this specialized data source enables agents to search and fetch the latest authoritative developer guidelines to use as relevant context. The Android Knowledge Base ensures agents have the latest context, guidance, and best practices for Android. By accessing the frequently updated knowledge base, agents can ground their responses in the most recent information from Android developer docs, Firebase, Google Developers, and Kotlin docs. This ensures that even if an LLM's training cutoff is a year old, it can still provide guidance on the latest frameworks and patterns we recommend today. Android Studio: The ultimate destination for premium apps In addition to empowering developers and agents to handle project setup and boilerplate code, we’ve also designed these new tools and resources to make it easier to transition to Android Studio. 
That means you can start a prototype quickly with an agent using Android CLI and then open the project in Android Studio to fine-tune your UI with visual tools for code editing, UI design, deep debugging, and advanced profiling that scale with the growing capabilities of your app. And when it is time to build a high-quality app for large-scale publication across various device types, our agent in Android Studio is here to help, while leveraging the latest development best practices and libraries. Beyond the powerful Agent and Planning Modes for active development, we have introduced an AI-powered New Project flow, which provides an entry point for rapidly prototyping your next great idea for Android. These built-in agents make it simple to extend your app ideas across phones, foldables, tablets, Wear OS, Android Auto, and Android TV. Equipped with full context of your project’s source code and a comprehensive suite of debugging, profiling, and emulation tools, you have an end-to-end, AI-accelerated toolkit at your disposal. Get started today Android CLI is available in preview today, along with a growing set of Android skills and knowledge for agents. To get started, head over to d.android.com/tools/agents to download Android CLI.
Posted by Bennet Manuel, Group Product Manager, App & Ecosystem Trust We strive to make Google Play the safest and most trusted experience possible. Today, we’re announcing a new set of policy updates and an account transfer feature to boost user privacy and protect your business from fraud. By providing better features for users and easy-to-integrate tools for you, we’re making it simpler to build safer apps so you can focus on creating great experiences. We’re also expanding our features to help you manage new contact and location policy changes, so you have a smoother, more predictable app review experience. By October, Play policy insights in Android Studio can help you proactively identify if your app should use these new features and guide you on the exact steps to take. Additionally, new pre-review checks in the Play Console will be available starting October 27 to flag potential contacts or location permissions policy issues so you can fix them before you submit your app for review. Here is what is new and how you can prepare. Contact Picker: A privacy-friendly way to access contacts Android is introducing the Android Contact Picker as the new standard for accessing contact information (e.g., for invites, sharing, or one-time lookups). This picker lets users share only the specific contacts they want to, helping build trust and protect privacy. Alongside this tool, we are updating our policy to require that all applicable apps use the picker, or other privacy-focused alternatives like Sharesheet, as the primary way to access users’ contacts. READ_CONTACTS will be reserved for apps that can’t function without it. What you’ll need to do If your app asks for access to contacts for features like sharing or inviting, you should update your code to use the picker and remove the READ_CONTACTS permission entirely (if targeting Android 17 and above). 
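As an illustration of the change described above, here is a minimal sketch of switching an invite flow to the system picker using the existing ActivityResultContracts.PickContact contract; the activity, property, and helper names below are ours.

```kotlin
import android.net.Uri
import androidx.activity.ComponentActivity
import androidx.activity.result.contract.ActivityResultContracts

// Sketch: replace broad READ_CONTACTS access with the system contact
// picker. The user shares exactly one contact (or none), and the app
// receives a content URI scoped to that single contact.
class InviteActivity : ComponentActivity() {

    private val pickContact =
        registerForActivityResult(ActivityResultContracts.PickContact()) { uri: Uri? ->
            // null means the user dismissed the picker without choosing.
            uri?.let { handleInvite(it) }
        }

    fun onInviteClicked() {
        // Launches the system picker; no runtime permission needed.
        pickContact.launch(null)
    }

    private fun handleInvite(contact: Uri) {
        // Query only this one URI instead of the whole contacts provider.
    }
}
```

Because the picker returns a URI the user explicitly chose, the app can drop the READ_CONTACTS permission from its manifest entirely for flows like this.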
If your app requires full, ongoing access to a user’s contact list to function, you must justify this need by submitting a Play Developer Declaration in the Play Console. This form will be available before October. Location button: A more privacy-friendly way to access location Android is introducing a new, streamlined location button to make requesting precise data easier for one-time actions, like finding a store or tagging a photo. This feature replaces complex permission dialogs with a single tap, helping users make clearer choices about how much information they share and for how long. We’re updating our policy to require apps to use this button for one-time precise location access unless they require persistent, always-on location access. This creates a faster, more predictable experience for your users and reduces the friction of traditional permission requests. What you’ll need to do Review your app's location usage to ensure you are requesting the minimum amount of location data needed for your app to work. If your app targets Android 17 and above and uses precise location for discrete, temporary actions, implement the location button by adding the onlyForLocationButton flag in your manifest. If your app requires persistent precise location to function, you will need to submit a Play Developer Declaration in Play Console to show why the new button or coarse location isn’t sufficient for your app’s core features. This form will be available before October. Account Transfer: Protecting your business You asked for a secure way to transfer app ownership during business changes, and we listened. We’re launching an official account transfer feature directly in Play Console that’s designed to help you easily transfer ownership during sales and mergers while also protecting your business from fraud. Starting May 27, account ownership changes must use this official feature. 
That means that unofficial transfers (like sharing login credentials or buying and selling accounts on third-party marketplaces), which leave your business vulnerable, are not permitted. What you’ll need to do Initiate any future account owner changes through the "Users and permissions" page in Play Console. Every transfer will include a mandatory 7-day security cool-down period. This gives your team time to spot and cancel any unauthorized attempts to take over your account. See Transferring ownership of a Play Console developer account for more guidance. What’s next We want to give you plenty of time to review these changes and update your apps. For more information, deadlines, and the full list of Google Play policy updates we’re announcing today, please visit the Policy Announcements page. Thank you for your partnership in keeping Play safe for everyone.
Google I/O 2026: Livestream Schedule Revealed Posted by The Google I/O team The Google I/O schedule is here! Tune in May 19–20 as we unveil Google’s biggest updates across AI, Android, Chrome, and Cloud. Discover new tools and features designed to unlock the future of development with agentic coding. We’re kicking things off with the Google keynote at 10:00 am PT on May 19, followed by the Developer keynote at 1:30 pm PT. Block your calendars for two days of live sessions, straight from Mountain View, full of announcements, live demos, and new professional development sessions. Here’s a sneak peek at what we’ll cover: The agentic era of development: Discover how the next evolution of our developer tools is transforming the way you write software. Learn how to seamlessly transition from rapid ideation to orchestrating powerful, autonomous workflows, allowing AI to handle the heavy lifting while you focus on the big picture. Enabling Android development anywhere: Learn how we are making AI even more helpful for your app workflows. From initial prototyping to final native polish, explore the latest ways we’re making it easier and faster to build high quality Android experiences. Building powerful, agentic web applications: The web is accelerating faster than ever, and we are equipping you for what's next. Discover new tools to build agent-ready web applications, automate complex debugging workflows, and ship highly interactive UI directly in the browser. Join us online May 19–20, followed by a fresh drop of on-demand sessions and codelabs on May 21. Register today to explore the full program and catch all the latest developer updates, featuring sessions like: What's new in Android What’s new in Google AI What's new in Chrome Build core skills to thrive as an AI-era developer
Posted by Steven Jenkins, Product Manager, Android Studio Testing multi-device interactions is now easier than ever with the Android Emulator. Whether you are building a multiplayer game, extending your mobile application across form factors, or launching virtual devices that require a device connection, the Android Emulator now natively supports these developer experiences. Previously, interconnecting multiple Android Virtual Devices (AVDs) caused significant friction. It required manually managing complex port forwarding rules just to get two emulators to connect. Now you can take advantage of a new networking stack for the Android Emulator which brings zero-configuration peer-to-peer connectivity across all your AVDs. Interconnecting emulator instances The new networking stack for the Android Emulator transforms how emulators communicate. Previously, each virtual device operated on its own local area network (LAN), effectively isolating it from other AVDs. The new Wi-Fi network stack changes this by creating a shared virtual network backplane that bridges all running instances on the same host machine. Key Benefits: Zero-configuration: No more manual port forwarding or scripting adb commands. AVDs on the same host appear on the same virtual network. Peer-to-peer connectivity: Critical protocols like Wi-Fi Direct and Network Service Discovery (NSD) work out of the box between emulators. Improved stability: Resolves long-standing stability issues, such as data loss and connection drops found in the legacy stack. Cross-platform consistency: Works the same across Windows, macOS and Linux. Use Cases Multi-device apps: Test file sharing, local multiplayer gaming, or control flows between a phone and another Android device. Continuous Integration: Create robust, automated multi-device test pipelines without flaky network scripts. Android XR & AI glasses: Easily test companion app pairing and data streaming between a phone and glasses within Android Studio. 
Automotive & Wear OS: Validate connectivity flows between a mobile device and a vehicle head unit or smartwatch. The new emulator networking stack allows multiple AVDs to share a virtual network, enabling direct peer-to-peer communication with zero configuration. Get Started The new networking capability is enabled by default in the latest Android Emulator release (36.5), which is available via the Android Studio SDK Manager. Just update your emulator and launch multiple devices! If you need to disable this feature or want to learn more, please refer to our documentation. As always, we appreciate any feedback. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, Medium, YouTube, or X.
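To make the zero-configuration connectivity concrete, here is a hedged, plain-sockets sketch of the kind of peer-to-peer exchange the shared virtual network enables: run the server half on one AVD and point the client at that AVD's address, with no port forwarding. All function names are ours, and the demo below uses the loopback address so it runs on a single machine.

```kotlin
import java.net.ServerSocket
import java.net.Socket

// One AVD hosts a trivial line-based echo service.
fun serveOnce(server: ServerSocket) {
    server.accept().use { client ->
        val line = client.getInputStream().bufferedReader().readLine()
        client.getOutputStream().writer().apply {
            write("echo: $line\n")
            flush()
        }
    }
}

// Another AVD connects by IP and exchanges one message.
fun request(host: String, port: Int, message: String): String =
    Socket(host, port).use { socket ->
        socket.getOutputStream().writer().apply {
            write("$message\n")
            flush()
        }
        socket.getInputStream().bufferedReader().readLine()
    }

fun main() {
    val server = ServerSocket(0) // any free port; loopback demo
    val serverThread = Thread { serveOnce(server) }.apply { start() }
    println(request("127.0.0.1", server.localPort, "hello")) // prints "echo: hello"
    serverThread.join()
    server.close()
}
```

On emulators with the new stack, the same client code works with the peer AVD's network address in place of the loopback address.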
Posted by Matthew Warner, Google Product Manager Every developer's AI workflow and needs are unique, and it's important to be able to choose how AI helps your development. In January, we introduced the ability to choose any local or remote AI model to power AI functionality in Android Studio, and today, we're announcing the availability of Gemma 4 for AI coding assistance in Android Studio. This new local model trained on Android development provides the best of both worlds: the privacy and cost-efficiency of on-device processing alongside state-of-the-art reasoning and tool-calling capabilities. AI assistance, locally delivered By running locally on your machine, Gemma 4 gives you AI code assistance that doesn't require an internet connection or an API key for its core operations. Key benefits include: Privacy and security: Your code stays on your machine. Gemma 4 processes all Agent Mode requests locally, making it an ideal choice for developers working with data privacy requirements or in secure corporate environments. Cost efficiency: Run complex agentic workflows without worrying about hitting quotas. Gemma 4 is optimized to run efficiently on modern development hardware, utilizing local GPU and RAM to provide snappy, responsive assistance. Offline availability: Use the agent to write code even when you don’t have an internet connection. State-of-the-art reasoning: Gemma 4 delivers best-in-class reasoning, capable of complex multi-step coding tasks in Agent Mode. Powerful agentic coding Gemma 4 was trained for Android development with agentic tool calling capabilities. When you select Gemma 4 as your local model, you can leverage Agent Mode for a variety of development use cases, such as: Designing new features: Developers can ask the agent to build a new feature or an entire app with commands like “build a calculator app” and the agent will not only generate the UI code but will use Android best practices like writing in Kotlin and using Jetpack Compose. 
Refactoring: You can give high-level commands such as "Extract all hardcoded strings and migrate them to strings.xml." The agent will scan your codebase, identify instances requiring changes, and apply the edits across multiple files simultaneously. Bug fixing and build resolution: If a project fails to build or has persistent lint errors, you can prompt the agent to "Build my project and fix any errors." The agent will navigate to the offending code and iteratively apply fixes until the build is successful. Recommended hardware requirements The 26B MoE model is recommended for Android app developers whose machines meet its hardware requirements below. Total RAM needed includes both Android Studio and Gemma.

Model            Total RAM needed    Storage needed
Gemma E2B        8 GB                2 GB
Gemma E4B        12 GB               4 GB
Gemma 26B MoE    24 GB               17 GB

Get started To get started, ensure you have the latest version of Android Studio installed. Install an LLM provider, such as LM Studio or Ollama, on your local computer. In Settings > Tools > AI > Model Providers, add your LM Studio or Ollama instance. Download the Gemma 4 model from Ollama or LM Studio. Refer to the hardware requirements above for model size selection. In Agent Mode, select Gemma 4 as your active model. For a detailed walkthrough on configuration, check out the official documentation on how to use a local model. We are excited to see how Gemma 4 enables more private, secure, and powerful development workflows. As always, your feedback is essential as we continue to refine the AI experience in Android Studio. If you find a bug or issue, please file an issue. You can also be part of our vibrant Android developer community on LinkedIn, YouTube, or X. Happy coding!
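As a quick aid for choosing a variant, here is a small sketch that picks the largest Gemma 4 model from the hardware table above that fits a given machine. The data-class and helper names are ours; the RAM and storage figures mirror the table.

```kotlin
// Each entry mirrors a row of the published hardware table.
data class GemmaVariant(val name: String, val ramGb: Int, val storageGb: Int)

val variants = listOf(
    GemmaVariant("Gemma E2B", ramGb = 8, storageGb = 2),
    GemmaVariant("Gemma E4B", ramGb = 12, storageGb = 4),
    GemmaVariant("Gemma 26B MoE", ramGb = 24, storageGb = 17),
)

// Variants are ordered smallest to largest, so the last fitting
// entry is the biggest model the machine can run.
fun largestFitting(availableRamGb: Int, freeStorageGb: Int): GemmaVariant? =
    variants.lastOrNull { it.ramGb <= availableRamGb && it.storageGb <= freeStorageGb }

fun main() {
    println(largestFitting(16, 100)?.name) // prints "Gemma E4B"
    println(largestFitting(32, 100)?.name) // prints "Gemma 26B MoE"
}
```

Remember that the RAM figures include Android Studio itself, so compare against total machine RAM, not free RAM with the IDE already running.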
Posted by David Chou, Product Manager and Caren Chang, Developer Relations Engineer At Google, we’re committed to bringing the most capable AI models directly to the Android devices in your pocket. Today, we’re thrilled to announce the release of our latest state-of-the-art open model: Gemma 4. These models are the foundation for the next generation of Gemini Nano, so code you write today for Gemma 4 will automatically work on Gemini Nano 4-enabled devices that will be available later this year. With Gemini Nano 4, you’ll benefit from our additional performance optimizations so you can ship to production across the Android ecosystem with the most efficient on-device inference. You can get early access to this model today through the AICore Developer Preview. Select the Gemini Nano 4 Fast model in the Developer Preview UI to see its blazing-fast inference speed in action before you write any code. Because Gemma 4 natively supports over 140 languages, you can expect improved localized, multilingual experiences for your global audience. Furthermore, Gemma 4 offers industry-leading performance with multimodal understanding, allowing your apps to understand and process text, images, and audio. To give you the best balance of performance and efficiency, Gemma 4 on Android comes in two sizes: E4B: Designed for higher reasoning power and complex tasks. E2B: Optimized for maximum speed (3x faster than the E4B model!) and lower latency. The new model is up to 4x faster than previous versions and uses up to 60% less battery. Starting today, you can experiment with improved capabilities including: Reasoning: Chain-of-thought commands and conditional statements can now be expected to return higher quality results. For example: “Determine if the following comment for a discussion thread passes the community guidelines. The comment does not pass the community guidelines if it contains one or more of these reason_for_flag: profanity, derogatory language, hate speech. 
If the review passes the community guidelines, return {true}. Otherwise, return {false, reason_for_flag}.” Math: With better math skills, the model can now more accurately answer questions. For example: “If I get 26 paychecks per year, how much should I contribute each paycheck to reach my savings goal of $10,000 over the course of a year?” Time understanding: The model is now more capable when reasoning about time, making it more accurate for use cases that involve calendars, reminders, and alarms. For example: “The event is at 6PM on August 18th, and a reminder should be sent out 10 hours before the event. Return the time and date the reminder should be sent.” Image understanding: Use cases that involve OCR (Optical Character Recognition) - such as chart understanding, visual data extraction, and handwriting recognition - will now return more accurate results. Join the Developer Preview today to download these preview models and start building next-generation features right away. Start building with Gemma 4 Start testing the model You can try out the model without code by following the Developer Preview guide. If you want to jump straight into integrating these models with your existing workflow, we’ve made that seamless. Head over to Android Studio to refine your prompt and build with the familiar ML Kit Prompt API. We’ve introduced a new ability to specify a model, allowing you to target the E2B (fast) or E4B (full) variants for testing. 
// Define the configuration with a specific track and preference
val previewFullConfig = generationConfig {
    modelConfig = ModelConfig {
        releaseTrack = ModelReleaseTrack.PREVIEW
        preference = ModelPreference.FULL
    }
}

// Initialize the GenerativeModel with the configuration
val previewModel = GenerativeModel.getClient(previewFullConfig)

// Verify that the specific preview model is available
val previewModelStatus = previewModel.checkStatus()
if (previewModelStatus == FeatureStatus.AVAILABLE) {
    // Proceed with inference
    val response = previewModel.generateContent(
        "If I get 26 paychecks per year, how much should I contribute each paycheck " +
            "to reach my savings goal of $10k over the course of a year? " +
            "Return only the amount."
    )
} else {
    // Handle the case where the preview model is not available
    // (e.g., print out log statements)
}

What to expect during the Developer Preview The goal of this Developer Preview is to give you a head start on refining prompt accuracy and exploring new use cases for your specific apps. We will be making several updates throughout the preview period, including support for tool calling, structured output, system prompts, and thinking mode in Prompt API, making it easier to take full advantage of the new capabilities and significant performance optimizations in Gemma 4. The preview models are available for testing on AICore-enabled devices. These models will run on the latest generation of specialized AI accelerators from Google, MediaTek, and Qualcomm Technologies. On other devices, the models will initially run on a CPU implementation that is not representative of final production performance. If your device is not AICore-enabled, you can also test these models via the AI Edge Gallery app. We’ll provide support for more devices in the future. How to get started Ready to see what Gemma 4 can do for your users? Opt-in: Sign up for the AICore Developer Preview. 
Download: Once opted in, you can trigger the download of the latest Gemma 4 models directly to your supported test device. Build: Update your ML Kit implementation to target the new models and start building in Android Studio.
Posted by Matthew McCullough, VP of Product Management Android Development Today, we are enhancing Android development with Gemma 4, our latest state-of-the-art open model designed with complex reasoning and autonomous tool-calling capabilities. Our vision is to enable local agentic AI on Android across the entire software lifecycle, from development to production. Android supports a range of Gemma 4 models, from the most efficient ones running directly on-device in your apps to more powerful ones running on your development machine to help you build apps. We are bringing Gemma 4 to Android developers through two pillars: Local-first Agentic coding: Experience powerful, local AI code assistance with Gemma 4 in Android Studio on your development computer. On-device intelligence: Build intelligent experiences using the ML Kit GenAI Prompt API to run Gemma 4 directly on Android device hardware. Coding with Gemma 4 in Android Studio When building Android apps, Android Studio can use Gemma 4 to leverage its state-of-the-art reasoning power and native support for tool use, while keeping the model and inference contained entirely on your local machine. Gemma 4 was trained on Android development and designed with Agent Mode in mind. This means that when you select Gemma 4 as your local model, you can leverage the full suite of Agent Mode capabilities for a variety of Android development use cases, including refactoring legacy code, building an entire app or new features, and applying fixes iteratively. Learn more about the possibilities Gemma 4 brings to your app development flow and how to get started. Prototyping with Gemma 4 on-device Since the introduction of Gemini Nano as the foundation model on Android, it has become available on over 140 million devices. Gemma 4 is the base model for the next generation of Gemini Nano (Gemini Nano 4) that is optimized for performance and quality on Android devices. 
This model is up to 4x faster than the previous version and uses up to 60% less battery. To make it as easy as possible to preview and prototype with the Gemma 4 E2B and E4B models directly on AICore-supported devices, we’re launching the AICore Developer Preview. While we continue to expand the ML Kit GenAI Prompt API surface to unlock additional advanced capabilities of the model, you can already start exploring new use cases with Gemma 4 using the Prompt API. Prepare your apps for the launch of Gemini Nano 4 on new flagship Android devices later this year by prototyping with Gemma 4 today. Read about the upcoming features and deep dive into the AICore Developer Preview and its Gemma 4 support here. Local agentic intelligence with Gemma 4 Running Gemma 4 locally, you can leverage its advanced reasoning and tool-calling capabilities across your entire workflow, from developing with the AI coding assistant in Android Studio to shipping intelligent features in your app with the ML Kit GenAI Prompt API. This local-first approach, available under Gemma’s open Apache license, provides an alternative for developers to innovate in a privacy-centric and cost-effective manner. In a future release, we will update Android Bench to include Gemma 4 and other open models, providing the quantified data you need to navigate performance trade-offs and select the best model for your use case. We can’t wait to see what you build!
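To give a feel for the on-device Prompt API mentioned above, here is a rough Kotlin sketch. The identifiers below (Generation, getClient, generateContent, response.text) are assumptions, not the confirmed ML Kit GenAI Prompt API surface, so check the AICore Developer Preview documentation for the real signatures.

```kotlin
// Sketch only: the identifiers below (Generation, getClient, generateContent)
// are assumptions about the ML Kit GenAI Prompt API surface and may differ
// from the shipped API. Inference runs fully on-device via AICore.
suspend fun draftReply(context: Context, message: String): String {
    val generation = Generation.getClient(context)   // hypothetical factory
    val response = generation.generateContent(       // hypothetical call
        "Write a short, friendly reply to: $message"
    )
    return response.text
}
```

Because the model runs locally, no prompt or message content needs to leave the device in this flow.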
Posted by Matt Dyor, Senior Product Manager Android Studio Panda 3 is now stable and ready for you to use in production. This release gives you even more control and customization over your AI-powered workflows, making it easier than ever to build high-quality Android apps. Whether you're bringing new capabilities to an existing app or standing up a brand new app, these updates elevate your development experience by allowing your AI agent in Android Studio to learn your specific practices and giving you granular control over its permissions. Lastly, in addition to agent skills and Agent Mode enhancements, Android Studio Panda 3 also includes updated support for building Android apps for cars. Here’s a deep dive into what’s new: Agent skills Create a more helpful AI agent by using agent skills in Android Studio. Agent skills are specialized instructions that teach the agent new capabilities and best practices for a specific workflow, which the agent can then leverage as needed. This significantly reduces the level of detail required for your day-to-day prompts. Agent skills work with Gemini in Android Studio or with other remote third-party LLMs you integrate into the agent framework in Android Studio. You and members of your team can create skills that tell the agent exactly how you want to handle specific tasks in your codebase.
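A skill lives in a SKILL.md file in your project's .skills directory (more on this below). Here is a minimal sketch of what one might contain; the frontmatter layout and the referenced paths are illustrative assumptions, so check the documentation for the exact schema.

```markdown
---
name: code-review
description: Review Kotlin changes against our team's coding standards.
---

When asked to review code:
1. Check naming and formatting against the style guide in docs/style.md (hypothetical path).
2. Flag any new dependency that is not declared in the version catalog.
3. Prefer Compose-idiomatic suggestions over View-based alternatives.
```

The name is what you type after @ to trigger the skill manually; the description is what lets the agent decide to apply it automatically.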
For example, you could create a custom “code review” skill tailored to your organization's coding standards, or a custom skill to provide the agent with more information on using an in-house library. Once you have created a skill, the agent will be able to use it automatically, or you can manually trigger it by typing @ followed by the skill name. Check out the documentation to learn more about how to create skills for your codebase, or better yet—ask your agent to help you build a new skill and it will guide you through the details! Manually Trigger Agent Skill in Android Studio Getting Started To build a skill for your project, do the following: Create a .skills directory inside your project's root folder. Place a SKILL.md file inside this new directory. Add a name and description to the file to define your custom workflow, and your skill is ready. Optionally include scripts, assets, and references to provide even more guidance to your agent. Agent skills in Android Studio Manage permissions for Agent Mode You control your codebase, and you can now be more deliberate about which data and capabilities you choose to share with AI agents. The new granular agent permissions in Android Studio let you decide exactly what agents can do for you. When Agent Mode needs to read files, run shell commands, or access the web, it explicitly asks for your permission. We know that 'approval fatigue' is a real risk in AI workflows—when a tool asks for permission too often, it’s easy to start clicking 'Allow' without fully reviewing the action. By offering granular 'Always Allow' rules for trusted operations and an optional sandbox for experimental ones, Android Studio helps you stay focused on the high-stakes decisions that actually require your manual sign-off. Agent Permissions Agent permissions are intuitive to set up and use.
For example, granting high-level permissions automatically authorizes related sub-tools, while commands you have previously approved will run automatically without interrupting your flow. Rest assured, accessing sensitive files like SSH keys will always require your explicit sign-off. For even more security, you can also use an optional sandbox to enforce strict, isolated control over the agent. Agent Shell Sandbox Empty Car App Library App template We’re making it easier to build Android apps for cars. Building apps for the car used to mean wrestling with complex configurations just to get the project to build successfully. Now, you can accelerate your development with the new “Empty Car App Library App” template in Android Studio. This template takes care of the required boilerplate code for a driving-optimized app on both Android Auto and Android Automotive OS, saving you significant time and effort. Instead of getting bogged down in setup, you can focus on creating the best experience for your users on the road. Getting Started To use the new template: Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project). Search for or select the Empty Car App Library App template. Name your app and click Finish to generate your driving-optimized app. Empty Car App Library App template Android Studio Panda releases Panda 3 builds on last month’s AI-focused Panda 2 release. Check out the Go from prompt to working prototype with Android Studio Panda 2 post to learn more about new Android Studio features, including the AI-powered New Project Flow that takes you from prompt to prototype and the Version Upgrade Assistant that takes the toil out of updating your dependencies. Get started Dive in and accelerate your development. Download Android Studio Panda 3 and start exploring these powerful new agentic features today. As always, your feedback is crucial to us.
Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!
Posted by Michael Stillwell, Developer Relations Engineer and Dimitris Kosmidis, Product Manager, Wear OS 64-bit architectures provide performance improvements and a foundation for future innovation, delivering faster and richer experiences for your users. We’ve supported 64-bit CPUs since Android 5, and today we are extending the 64-bit requirement to Wear OS. This aligns Wear OS with recent updates for Google TV and other form factors, building on the 64-bit requirement first introduced for mobile in 2019. This blog provides guidance to help you prepare your apps to meet these new requirements. The 64-bit requirement: timeline for Wear OS developers Starting September 15, 2026: All new apps and app updates that include native code will be required to provide 64-bit versions in addition to 32-bit versions when publishing to Google Play. Google Play will start blocking the upload of non-compliant apps to the Play Console. We are not making changes to our policy on 32-bit support, and Google Play will continue to deliver apps to existing 32-bit devices. The vast majority of Wear OS developers have already made this shift, with 64-bit compliant apps already available. For the remaining apps, we expect the effort to be small. Preparing for the 64-bit requirement Many apps are written entirely in non-native code (i.e., Kotlin or Java) and do not need any code changes. However, it is important to note that even if you do not write native code yourself, a dependency or SDK could be introducing it into your app, so you still need to check whether your app includes native code. Assess your app Inspect your APK or app bundle for native code using the APK Analyzer in Android Studio. Look for .so files within the lib folder. For ARM devices, 32-bit libraries are located in lib/armeabi-v7a, while the 64-bit equivalent is lib/arm64-v8a. Ensure parity: The goal is to ensure that your app runs correctly in a 64-bit-only environment.
While specific configurations may vary, for most apps this means that for each native 32-bit architecture you support, you should include the corresponding 64-bit architecture by providing the relevant .so files for both ABIs. Upgrade SDKs: If you only have 32-bit versions of a third-party library or SDK, reach out to the provider for a 64-bit compliant version. How to test 64-bit compatibility The 64-bit version of your app should offer the same quality and feature set as the 32-bit version. The Wear OS Android Emulator can be used to verify that your app behaves and performs as expected in a 64-bit environment. Note: Since Wear OS apps are required to target Wear OS 4 or higher to be submitted to Google Play, you are likely already testing on these newer, 64-bit-only images. When testing, pay attention to native code loaders such as SoLoader or older versions of OpenSSL, which may require updates to function correctly on 64-bit-only hardware. Next steps We are announcing this requirement now to give developers a six-month window to bring their apps into compliance before enforcement begins in September 2026. For more detailed guidance on the transition, please refer to our in-depth documentation on supporting 64-bit architectures. This transition marks an exciting step for the future of Wear OS and the benefits that 64-bit compatibility will bring to the ecosystem.
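The assessment step described above (looking for .so files under lib/) can also be scripted. Here is a small Kotlin sketch, not an official tool, that scans an APK (which is just a ZIP archive) and reports any 32-bit ABI that ships native code without its 64-bit counterpart:

```kotlin
import java.io.File
import java.util.Collections
import java.util.zip.ZipFile

// Sketch: report 64-bit ABIs missing from an APK whose 32-bit pair is present.
// APK Analyzer in Android Studio shows the same information interactively.
fun missing64BitAbis(apk: File): List<String> {
    val counterparts = mapOf("armeabi-v7a" to "arm64-v8a", "x86" to "x86_64")
    val abisWithNativeCode = ZipFile(apk).use { zip ->
        Collections.list(zip.entries())
            .filter { it.name.startsWith("lib/") && it.name.endsWith(".so") }
            .map { it.name.split("/")[1] }   // lib/<abi>/libfoo.so -> <abi>
            .toSet()
    }
    // A 32-bit ABI is a problem only if it is present and its 64-bit pair is not.
    return counterparts
        .filter { (abi32, abi64) -> abi32 in abisWithNativeCode && abi64 !in abisWithNativeCode }
        .map { it.value }
}
```

An empty result means your app already ships 64-bit native code for every 32-bit ABI it supports, or contains no native code at all.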
Posted by Andrew Lewis, Software Engineer Media3 1.10 is out! This release includes new features, bug fixes, and improvements, including Material3-based playback widgets, expanded format support in ExoPlayer, and improved speed adjustment when exporting media with Transformer. Read on to find out more, and check out the full release notes for a comprehensive list of changes. Playback UI and Compose We are continuing to expand the media3-ui-compose-material3 module to help you build Compose UIs for playback. We've added a new Player composable that combines a ContentFrame with customizable playback controls, giving you an out-of-the-box player widget with a modern UI. This release also adds a ProgressSlider composable for displaying player progress and performing seeks using dragging and tapping gestures. For playback speed management, a new PlaybackSpeedControl is available in the base media3-ui-compose module, alongside a styled PlaybackSpeedToggleButton in the Material 3 module. We'll continue working on new additions like track selection utils, subtitle support, and more customization options in upcoming Media3 releases. We're eager to hear your feedback, so please share your thoughts on the project issue tracker. Player composable in the Media3 Compose demo app Playback feature enhancements Media3 1.10 includes a variety of additions and improvements across the playback modules: Format support: ExoPlayer now supports extracting Dolby Vision Profile 10 and Versatile Video Coding (VVC) tracks in MP4 containers, and we've introduced MPEG-H UI manager support in the decoder_mpegh extension. The IAMF extension now seamlessly supports binaural output, either through the decoder via iamf_tools or through the Android OS Spatializer, with new logic to match the output layout of the speakers.
Ad playback: reliability improvements, improved HLS interstitial support for X-PLAYOUT-LIMIT and X-SNAP, and, with the latest IMA SDK dependency, control over whether ad click-through URLs open in custom tabs using setEnableCustomTabs. HLS: ExoPlayer now allows location fallback upon encountering load errors if redundant streams from different locations are available. Session: MediaSessionService now extends LifecycleService, allowing apps to access the lifecycle scoping of the service. One of our key focus areas this year is playback efficiency and performance. Media3 1.10 includes experimental support for scheduling the core playback loop in a more efficient way. You can try this out by enabling experimentalSetDynamicSchedulingEnabled() via the ExoPlayer.Builder. We plan to make further improvements in future releases, so stay tuned! Media editing and Transformer For developers building media editing experiences, we've made speed adjustment more robust. EditedMediaItem.Builder.setFrameRate() can now set a maximum output frame rate for video. This is particularly helpful for controlling output size and maintaining performance when increasing media speed with setSpeed(). New modules for frame extraction and applying Lottie effects In this release we've split some functionality into new modules to reduce the scope of some dependencies: FrameExtractor has been removed from the main media3-inspector module, so please migrate your code to use the new media3-inspector-frame module and update your imports to androidx.media3.inspector.frame.FrameExtractor. We have also moved the LottieOverlay effect to a separate media3-effect-lottie module. As a reminder, this gives you a straightforward way to apply vector-based Lottie animations directly to video frames. Please get in touch via the issue tracker if you run into any bugs, or if you have questions or feature requests. We look forward to hearing from you!
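Opting in to the experimental scheduling mode described above is a single builder call when you construct the player. A minimal sketch (the API is experimental and may change or be renamed in later releases):

```kotlin
// Sketch: enable experimental dynamic scheduling of the core playback loop.
// The API is experimental and may change in a future Media3 release.
val player = ExoPlayer.Builder(context)
    .experimentalSetDynamicSchedulingEnabled(true)
    .build()
```

Because the flag is experimental, it is worth measuring battery and dropped-frame metrics with and without it before shipping.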
Posted by Ben Weiss, Senior Developer Relations Engineer Unlocking broad performance wins with R8 full mode ● Startup Reliability: Cold starts improved by 30%, Warm starts by 24%, and Hot starts by 14%. ● Launch Speed: P50 launch times improved by 11% and P90 launch times by 12%. ● Efficiency: Overall app size was reduced by 9%. ● Stability: ANR reduction of 35%. Enabling optimizations with a single change Many Android apps use an outdated default configuration file which disables most functionality of the R8 optimizer. The main change Monzo made to unlock these performance improvements was to replace the proguard-android.txt default file with proguard-android-optimize.txt. This change removes the -dontoptimize instruction and allows R8 to properly do its job.
buildTypes {
    release {
        isMinifyEnabled = true
        isShrinkResources = true
        proguardFiles(
            getDefaultProguardFile("proguard-android-optimize.txt"),
        )
    }
}

After making this change, it's worth looking at your Keep configuration files. These files tell R8 which parts of your code to leave alone (usually because they're called dynamically or by external libraries). Tidying up unnecessary Keep rules means R8 can do more. Improving scroll performance with Baseline Profiles Monzo also generated Baseline Profiles, specifically targeting scroll and rendering performance on their main feed. This strategy ensured that the most common user journeys—opening the app and scrolling the feed—were fully optimized. The impact on rendering was substantial: P90 scroll performance became 71% faster, and P95 scroll performance improved by 87%. Scrolling the app is now noticeably smoother. Monzo built this into their release process to maintain these improvements over time. "We trigger the baseline profile generation every weekday (before running our nightly builds) and commit the latest changes once completed," Neumayer explains. Keeping up with modern Android development Neumayer's advice for other teams? Regularly check your practices against current standards: "Take a look at the latest recommendations from Google around app performance and check if you're following all the latest advice." To get started and learn more about R8, visit https://d.android.com/r8
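A feed-focused profile generator along the lines Monzo describes might look like the following sketch, using the Jetpack Macrobenchmark BaselineProfileRule; the package name and the "feed" resource ID are hypothetical.

```kotlin
// Sketch: generate a Baseline Profile covering cold start plus a feed scroll.
// "com.example.app" and the "feed" resource ID are hypothetical placeholders.
@RunWith(AndroidJUnit4::class)
class FeedBaselineProfile {
    @get:Rule
    val baselineProfileRule = BaselineProfileRule()

    @Test
    fun generate() = baselineProfileRule.collect("com.example.app") {
        pressHome()
        startActivityAndWait()
        // Scroll the main feed so rendering-critical paths get compiled ahead of time.
        device.findObject(By.res(packageName, "feed")).fling(Direction.DOWN)
        device.waitForIdle()
    }
}
```

Running this on a device produces a profile that the Baseline Profile Gradle plugin can package with the release build, which is what makes a scheduled regeneration job like Monzo's practical.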
Posted by Matthew Forsythe, Director Product Management, Android App Safety Android is for everyone. It’s built on a commitment to an open and safe platform. Users should feel confident installing apps, no matter where they get them from. However, our recent analysis found over 90 times more malware from sideloaded sources than on Google Play. So as an extra layer of security, we are rolling out Android developer verification to help prevent malicious actors from hiding behind anonymity to repeatedly spread harm. Over the past several months, we’ve worked closely with the community to improve the design so it accounts for the many ways people use Android, balancing openness with safety. Start your verification today Today, we’re starting to roll out Android developer verification to all developers in both the new Android Developer Console and Play Console. This allows you to complete your verification and register your apps before user-facing changes begin later this year. If you only distribute apps outside of Google Play, you can create an account in the Android Developer Console today. If you're on Google Play, check your Play Console account for updates over the next few weeks. If you’ve already verified your identity there, then you’re likely already set. Most of your users’ download experience will not change at all While verification tools are rolling out now, the experience for users downloading your apps will not change until later this year. The user-side protections will first go live in Brazil, Indonesia, Singapore, and Thailand this September, before expanding globally in 2027. We’ve shared this timeline early to ensure you have ample time to complete your verification. Following this deadline, for the vast majority of users, the experience of installing apps will stay exactly the same.
It’s only when a user tries to install an unregistered app that they’ll need to use ADB or the advanced flow, helping us keep the broader community safe while preserving flexibility for our power users. Developers can still choose where to distribute their apps. Most users’ download experience will not change Tailoring the verification experience to your feedback To balance the need for safety with our commitment to openness, we’ve improved the verification experience based on your feedback. We’ve streamlined the developer experience to be more integrated with existing workflows and maintained choice for power users. For Android Studio developers: In the next two months, you’ll see your app's registration status right in Android Studio when you generate a signed App Bundle or APK. For Play developers: If you've completed Play Console's developer verification requirements, your identity is already verified and we'll automatically register eligible Play apps for you. In the rare case that we are unable to register your apps for you, you will need to follow the manual app claim process. Over the next couple of weeks, more details will be provided in the Play Console and through email. Also, you’ll be able to register apps you distribute outside of Play in the Play Console too. The Android developer verification page in your Play Console will show the registration status for each of your apps. For students and hobbyists: To keep Android accessible to everyone, we're building a free, limited distribution account with no government ID required, so you can share your work with up to 20 devices. You only need an email account to get started. Sign up for early access. We’ll send invites in June. For power users: We are maintaining the choice to install apps from any source. You can use the new advanced flow for sideloading unregistered apps or continue using ADB.
This maintains choice while protecting vulnerable users. What’s next? We’re rolling this out carefully and working closely with developers, users, and our partners. In April, we’ll introduce Android Developer Verifier, a new Google system service that will be used to check if an app is registered to a verified developer. April 2026: Users will start to see Android Developer Verifier in their Google Systems services settings. June 2026: Early access: Limited distribution accounts for students and hobbyists. August 2026: Limited distribution accounts launch globally. Advanced flow for power users launches globally. September 30, 2026: Apps must be registered by verified developers in order to be installed and updated on certified Android devices in Brazil, Indonesia, Singapore, and Thailand. Unregistered apps can be sideloaded with ADB or advanced flow. 2027 and beyond: We will roll out this requirement globally. We’re committed to an Android that is both open and safe. Check out our developer guides to get started today.
Posted by Robert Clifford, Developer Relations Engineer and Manjeet Rulhania, Software Engineer A pillar of the Android ecosystem is our shared commitment to user trust. As the mobile landscape has evolved, so has our approach to protecting sensitive information. In Android 17, we’re introducing a suite of new location privacy features designed to give users more control and provide developers with elegant solutions for data minimization and product safety. Our strategy focuses on introducing new tools to balance high-quality experiences with robust privacy protections, and improving transparency for users to help manage their data. Introducing the location button: simplified access for one-time use For many common tasks, like finding a nearby shop or tagging a social post, your app doesn’t need permanent or background access to a user's precise location. With Android 17, we are introducing the location button, a new UI element designed to provide a well-lit path for responsible one-time precise location access. Industry partners have requested this new feature as a way to bring a simpler and more private location flow to their users. Users get better privacy protection Moving the decision about location sharing to the point where a user takes action helps the user make a clearer choice about how much information they want to share and for how long. This empowers users to limit data sharing to only what apps need in that session. Once consent is provided, this session-based access eliminates repeated prompts for location-dependent features. This benefits developers by creating a smoother experience for their users and providing high confidence in user intent, as access is explicitly requested at the moment of action. Full UI customization to match your app’s aesthetic The location button provides extensive customization options to ensure integration with your app's aesthetic while maintaining system-wide recognizability.
You can modify the button's visual style, including: background and icon color scheme, outline style, and size and shape. Additionally, you can select the appropriate text label from a predefined list of options. To ensure security and trust, the location icon itself remains mandatory and non-customizable, while the font size is system-managed to respect user accessibility settings. Simplified integration with Jetpack and automatic backwards compatibility The location button will be provided as a Jetpack library, ensuring easy integration into your existing app layouts similar to any other Jetpack view implementation, and simplifying how you request permission to access precise location. Additionally, when you implement the location button with the Jetpack library, it will automatically handle backwards compatibility by defaulting to the existing location prompt when a user taps it on a device running Android 16 or below. The Android location button is available for testing as of Android 17 Beta 3. Location access transparency Users often struggle to understand the tools they can use to monitor and control access to their location data. In Android 17, we are aligning location permission transparency with the high standards already set for the microphone and camera. Updated location indicator: A persistent indicator will now appear to inform a user whenever a non-system app accesses their location. Attribution & control: Users can tap the indicator to see exactly which apps have recently accessed their location and manage those permissions immediately through a "Recent app use" dialog. Strengthening user privacy with density-based coarse location Android 17 is also improving the algorithm for approximate (coarse) locations to be aware of population density. Previously, coarse locations used a static 2 km-wide grid, which in low-population areas may not be sufficiently private, since a 2 km square could often contain only a handful of users.
The new approach replaces this fixed grid with a dynamically sized area based on local population density. By increasing the grid size in areas with lower population density, Android ensures a more consistent privacy guarantee across different environments, from dense urban centers to remote regions. Improved runtime permission dialog The runtime permission dialog for location is one of the more complex flows for users to navigate, with users being asked to decide on the granularity and length of permission access they are willing to grant to each app. To help users make the most informed privacy decisions with less friction, we’ve redesigned the dialog to make "Precise" and "Approximate" choices more visually distinct, encouraging users to select the level of access that best suits their needs. Start building for Android 17 The new location privacy tools are available now in Beta 3. We’re looking for your feedback to help refine these features before the general release. Feedback: Report issues on the [Official Tracker]. Build a smoother, more private experience today.
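The Jetpack library for the location button has not shipped yet, so there is no final API to show; purely as a hypothetical illustration, wiring up the button might look something like this (the LocationButton type and its listener are invented names):

```kotlin
// Hypothetical sketch only: the Jetpack location button API is unreleased,
// so LocationButton and setOnLocationGrantedListener are invented names.
// (The related Android 17 notes mention a USE_LOCATION_BUTTON permission.)
val locationButton = LocationButton(context).apply {
    // Customization hooks described above: colors, outline, size and shape,
    // plus a text label chosen from a predefined list.
    setOnLocationGrantedListener {
        // Precise location is granted for this session only.
        showNearbyResults() // hypothetical app callback
    }
}
container.addView(locationButton)
```

Whatever the final shape, the key contract described in the post holds: the tap itself is the consent, and access lasts only for the session.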
Posted by Matthew McCullough, VP of Product Management, Android Developer Android 17 has officially reached platform stability today with Beta 3. That means the API surface is locked; you can perform final compatibility testing and push your Android 17-targeted apps to the Play Store. In addition, Beta 3 brings a host of new capabilities to help you build better, more secure, and highly integrated applications. Get your apps, libraries, tools, and game engines ready! If you develop an SDK, library, tool, or game engine, it's even more important to prepare any necessary updates now to prevent your downstream app and game developers from being blocked by compatibility issues and to allow them to target the latest SDK features. Please let your downstream developers know if updates are needed to fully support Android 17. To test, install your production app, or a test app that uses your library or engine, onto a device or emulator running Android 17 Beta 3 via Google Play or other means. Work through all your app's flows and look for functional or UI issues. Review the behavior changes to focus your testing. Each release of Android contains platform changes that improve privacy, security, and overall user experience, and these changes can affect your apps. Here are some changes to focus on: Resizability on large screens: Once you target Android 17, you can no longer opt out of maintaining orientation, resizability, and aspect ratio constraints on large screens. Dynamic code loading: If your app targets Android 17 or higher, the Safer Dynamic Code Loading (DCL) protection introduced in Android 14 for DEX and JAR files now extends to native libraries. All native files loaded using System.load() must be marked as read-only. Otherwise, the system throws UnsatisfiedLinkError. CT enabled by default: Certificate transparency (CT) is enabled by default. (On Android 16, CT was available, but apps had to opt in.)
Local network protections: Apps targeting Android 17 or higher have local network access blocked by default. Switch to using privacy-preserving pickers if possible, and use the new ACCESS_LOCAL_NETWORK permission for broad, persistent access. Media and camera enhancements Photo Picker customization options Android now allows you to tailor the visual presentation of the photo picker to better complement your app’s user interface. By leveraging the new PhotoPickerUiCustomizationParams API, you can modify the grid view aspect ratio from the standard 1:1 square to a 9:16 portrait display. This flexibility extends to both the ACTION_PICK_IMAGES intent and the embedded photo picker, enabling you to maintain a cohesive aesthetic when users interact with media. This is all part of our effort to help make the privacy-preserving Android photo picker fit seamlessly with your app experience. Learn more about how you can embed the photo picker directly into your app for the most native experience.

val params = PhotoPickerUiCustomizationParams.Builder()
    .setAspectRatio(PhotoPickerUiCustomizationParams.ASPECT_RATIO_PORTRAIT_9_16)
    .build()
val intent = Intent(MediaStore.ACTION_PICK_IMAGES).apply {
    putExtra(MediaStore.EXTRA_PICK_IMAGES_UI_CUSTOMIZATION_PARAMS, params)
}
startActivityForResult(intent, REQUEST_CODE)

Support for the RAW14 image format: Android 17 introduces support for the RAW14 image format — the de-facto industry standard for high-end digital photography — via the new ImageFormat.RAW14 constant. RAW14 is a single-channel, 14-bit-per-pixel format that uses a densely packed layout where every four consecutive pixels are packed into seven bytes. Vendor-defined camera extensions: Android 17 adds vendor-defined extensions that let hardware partners define and implement custom camera extension modes, giving you access to the best and latest camera features, such as 'Super Resolution' or cutting-edge AI-driven enhancements.
You can query for these modes using the isExtensionSupported(int) API. Camera device type APIs: New Android 17 APIs allow you to query the underlying device type to identify if a camera is built-in hardware, an external USB webcam, or a virtual camera. Bluetooth LE Audio hearing aid support Android now includes a specific device category for Bluetooth Low Energy (BLE) Audio hearing aids. With the addition of the AudioDeviceInfo.TYPE_BLE_HEARING_AID constant, your app can now distinguish hearing aids from regular headsets.

val audioManager = getSystemService(Context.AUDIO_SERVICE) as AudioManager
val devices = audioManager.getDevices(AudioManager.GET_DEVICES_OUTPUTS)
val isHearingAidConnected = devices.any { it.type == AudioDeviceInfo.TYPE_BLE_HEARING_AID }

Granular audio routing for hearing aids Android 17 allows users to independently manage where specific system sounds are played. They can choose to route notifications, ringtones, and alarms to connected hearing aids or the device's built-in speaker. Extended HE-AAC software encoder Android 17 introduces a system-provided Extended HE-AAC software encoder. This encoder supports both low and high bitrates using unified speech and audio coding. You can access this encoder via the MediaCodec API using the name c2.android.xheaac.encoder or by querying for the audio/mp4a-latm MIME type.

val encoder = MediaCodec.createByCodecName("c2.android.xheaac.encoder")
val format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 48000, 1)
format.setInteger(MediaFormat.KEY_BIT_RATE, 24000)
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectXHE)
encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE)

Performance and battery enhancements Reduce wakelocks with listener support for allow-while-idle alarms Android 17 introduces a new variant of AlarmManager.setExactAndAllowWhileIdle that accepts an OnAlarmListener instead of a PendingIntent.
This new callback-based mechanism is ideal for apps that currently rely on continuous wakelocks to perform periodic tasks, such as messaging apps maintaining socket connections.

val alarmManager = getSystemService(AlarmManager::class.java)
val listener = AlarmManager.OnAlarmListener {
    // Do work here
}
alarmManager.setExactAndAllowWhileIdle(
    AlarmManager.ELAPSED_REALTIME_WAKEUP,
    SystemClock.elapsedRealtime() + 60000,
    listener,
    null
)

Privacy updates

System-provided Location Button

Android is introducing a system-rendered location button that you will be able to embed directly into your app's layout using an Android Jetpack library. When a user taps this system button, your app is granted precise location access for the current session only. To implement this, you need to declare the USE_LOCATION_BUTTON permission.

Discrete password visibility settings for touch and physical keyboards

This feature splits the existing "Show passwords" system setting into two distinct user preferences: one for touch-based inputs and another for physical (hardware) keyboard inputs. Characters entered via physical keyboards are now hidden immediately by default.

val isPhysical = event.source and InputDevice.SOURCE_KEYBOARD == InputDevice.SOURCE_KEYBOARD
val shouldShow = android.text.ShowSecretsSetting.shouldShowPassword(context, isPhysical)

Security

Enforced read-only dynamic code loading

To improve security against code injection attacks, Android now enforces that dynamically loaded native libraries must be read-only. If your app targets Android 17 or higher, all native files loaded using System.load() must be marked as read-only beforehand.
val libraryFile = File(context.filesDir, "my_native_lib.so")
// Mark the file as read-only before loading to comply with Android 17+ security requirements
libraryFile.setReadOnly()
System.load(libraryFile.absolutePath)

Post-Quantum Cryptography (PQC) Hybrid APK Signing

To prepare for future advancements in quantum computing, Android is introducing support for Post-Quantum Cryptography (PQC) through the new v3.2 APK Signature Scheme. This scheme utilizes a hybrid approach, combining a classical signature with an ML-DSA signature.

User experience and system UI

Better support for widgets on external displays

This feature improves the visual consistency of app widgets shown on connected or external displays with different pixel densities, when dimensions are specified using DP or SP units.

val options = appWidgetManager.getAppWidgetOptions(appWidgetId)
val displayId = options.getInt(AppWidgetManager.OPTION_APPWIDGET_DISPLAY_ID)
val remoteViews = RemoteViews(context.packageName, R.layout.widget_layout)
remoteViews.setViewPadding(
    R.id.container,
    16f, 8f, 16f, 8f,
    TypedValue.COMPLEX_UNIT_DIP
)

Hidden app labels on the home screen

Android now provides a user setting to hide app names (labels) on the home screen workspace. Ensure your app icon is distinct and recognizable.

Desktop Interactive Picture-in-Picture

Unlike traditional Picture-in-Picture, these pinned windows remain interactive while staying always-on-top of other application windows in desktop mode.
val appTask: ActivityManager.AppTask = activity.getSystemService(ActivityManager::class.java).appTasks[0]
appTask.requestWindowingLayer(
    ActivityManager.AppTask.WINDOWING_LAYER_PINNED,
    context.mainExecutor,
    object : OutcomeReceiver<Int, Exception> {
        override fun onResult(result: Int) {
            if (result == ActivityManager.AppTask.WINDOWING_LAYER_REQUEST_GRANTED) {
                // Task successfully moved to pinned layer
            }
        }
        override fun onError(error: Exception) {}
    }
)

Redesigned screen recording toolbar

Core functionality

VPN app exclusion settings

By using the new ACTION_VPN_APP_EXCLUSION_SETTINGS Intent, your app can launch a system-managed Settings screen where users can select applications to bypass the VPN tunnel.

val intent = Intent(Settings.ACTION_VPN_APP_EXCLUSION_SETTINGS)
if (intent.resolveActivity(packageManager) != null) {
    startActivity(intent)
}

OpenJDK 25 and 21 API updates

This update brings extensive features and refinements from OpenJDK 21 and OpenJDK 25, including the latest Unicode support and enhanced SSL support for named groups in TLS.

Get started with Android 17

You can enroll any supported Pixel device or use the 64-bit system images with the Android Emulator. Compile against the new SDK and report issues on the feedback page. Test your current app for compatibility and learn whether your app is affected by changes in Android 17. For complete information, visit the Android 17 developer site.
Posted by Robbie McLachlan, Developer Marketing The wait is over! We are incredibly excited to share the Google Play Apps Accelerator class of 2026. We’ve handpicked a group of high-potential studios from across the globe to embark on a 12-week journey designed to supercharge their success. Here’s what’s in store for the program’s first ever class: Curated learning: virtual masterclasses and workshops led by industry trailblazers. Guidance & mentorship: 1-to-1 sessions covering everything from technical scaling to leadership. Direct access: exclusive sessions with experts from Google and the world's top studios. Without further ado, join us in congratulating them! Google Play Apps Accelerator | Class of 2026 Americas Anytune AstroVeda BetterYou Changed Focus Forge Human Program Know Your Lemons kweliTV Language Innovation Matraquinha MR ROCCO MUU nutrition NKENNE Skarvo Starcrossed Wishfinity Asia Pacific Human Health Kitakuji Lazy Surfers Mellers Tech Reehee Company Europe, Middle East & Africa cabuu Class54 Education Digital Garden EverPixel Geolives HelloMind ifal Idea Accelerator Maposcope Ochy Picastro Pixelbite Record Scanner Talkao unorderly Xeropan International Congratulations again to all the founders selected, we can’t wait to see your apps grow on our platform. The Google Play Apps Accelerator is part of our mission to help businesses of all sizes grow on Google Play and reach their full potential. Discover more about Google Play’s programs, resources and tools.
Posted by Roxanna Aliabadi Walker, Senior Product Manager

Privacy and user control remain at the heart of the Android experience. Just as the photo picker made media sharing secure and easy to implement, we are now bringing that same level of privacy, simplicity, and great user experience to contact selection.

A New Standard for Contact Privacy

Historically, applications requiring access to a specific user's contacts relied on the broad READ_CONTACTS permission. While functional, this approach often granted apps more data than necessary. The new Android Contact Picker, introduced in Android 17, changes this dynamic by providing a standardized, secure, and searchable interface for contact selection. This feature allows users to grant apps access only to the specific contacts they choose, aligning with Android's commitment to data transparency and minimized permission footprints.

How It Works

Developers can integrate the Contact Picker using the Intent.ACTION_PICK_CONTACTS intent. This updated API offers several powerful capabilities:

Granular Data Requests: Apps can specify exactly which fields they need, such as phone numbers or email addresses, rather than receiving the entire contact record.

Multi-Selection Support: The picker supports both single and multiple contact selections, giving developers more flexibility for features like group invitations.

Selection Limits: Developers can set custom limits on the number of contacts a user can select at one time.

Temporary Access: Upon selection, the system returns a Session URI that provides temporary read access to the requested data, ensuring that access does not persist longer than necessary.

Access to other profiles: When using this new intent, the interface will allow users to select contacts from other user profiles such as a work profile, cloned profile or a private space.
Optimized Performance: The Contact Picker returns a single Uri that allows for collective result querying, eliminating the need to query individual contact URIs separately as required by ACTION_PICK. This efficiency further reduces system overhead by utilizing a single Binder transaction.

Backward Compatibility and Implementation

For devices running Android 17 or higher, the system automatically upgrades legacy ACTION_PICK intents that specify contact data types to the new, more secure interface. However, to take full advantage of advanced features like multi-selection, developers are encouraged to update their implementation code and utilize the ContentResolver to query the returned Session URI.

Integrate the contact picker

To integrate the Contact Picker, developers use the ACTION_PICK_CONTACTS intent. Below is a code example demonstrating how to launch the picker and request specific data fields, such as email and phone numbers.

// State to hold the list of selected contacts
var contacts by remember { mutableStateOf<List<Contact>>(emptyList()) }

// Launcher for the Contact Picker intent
val pickContact = rememberLauncherForActivityResult(StartActivityForResult()) {
    if (it.resultCode == Activity.RESULT_OK) {
        val resultUri = it.data?.data ?: return@rememberLauncherForActivityResult
        // Process the result URI in a background thread
        coroutine.launch {
            contacts = processContactPickerResultUri(resultUri, context)
        }
    }
}

// Define the specific contact data fields you need
val requestedFields = arrayListOf(
    Email.CONTENT_ITEM_TYPE,
    Phone.CONTENT_ITEM_TYPE,
)

// Set up the intent for the Contact Picker
val pickContactIntent = Intent(ACTION_PICK_CONTACTS).apply {
    putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5)
    putStringArrayListExtra(
        EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS,
        requestedFields
    )
    putExtra(EXTRA_PICK_CONTACTS_MATCH_ALL_DATA_FIELDS, false)
}

// Launch the picker
pickContact.launch(pickContactIntent)

After the user makes a selection, the app processes the result
by querying the returned Session URI to extract the requested contact information.

// Data class representing a parsed Contact with selected details
data class Contact(val id: String, val name: String, val email: String?, val phone: String?)

// Helper function to query the content resolver with the URI returned by the Contact Picker.
// Parses the cursor to extract contact details such as name, email, and phone number
private suspend fun processContactPickerResultUri(
    sessionUri: Uri,
    context: Context
): List<Contact> = withContext(Dispatchers.IO) {
    // Define the columns we want to retrieve from the ContactPicker ContentProvider
    val projection = arrayOf(
        ContactsContract.Contacts._ID,
        ContactsContract.Contacts.DISPLAY_NAME_PRIMARY,
        ContactsContract.Data.MIMETYPE, // Type of data (e.g., email or phone)
        ContactsContract.Data.DATA1, // The actual data (Phone number / Email string)
    )
    val results = mutableListOf<Contact>()
    // Note: The Contact Picker Session Uri doesn't support custom selection & selectionArgs.
    context.contentResolver.query(sessionUri, projection, null, null, null)?.use { cursor ->
        // Get the column indices for our requested projection
        val contactIdIdx = cursor.getColumnIndex(ContactsContract.Contacts._ID)
        val mimeTypeIdx = cursor.getColumnIndex(ContactsContract.Data.MIMETYPE)
        val nameIdx = cursor.getColumnIndex(ContactsContract.Contacts.DISPLAY_NAME_PRIMARY)
        val data1Idx = cursor.getColumnIndex(ContactsContract.Data.DATA1)
        while (cursor.moveToNext()) {
            val contactId = cursor.getString(contactIdIdx)
            val mimeType = cursor.getString(mimeTypeIdx)
            val name = cursor.getString(nameIdx) ?: ""
            val data1 = cursor.getString(data1Idx) ?: ""
            // Determine if the current row represents an email or a phone number
            val email = if (mimeType == Email.CONTENT_ITEM_TYPE) data1 else null
            val phone = if (mimeType == Phone.CONTENT_ITEM_TYPE) data1 else null
            // Add the parsed contact to our results list
            results.add(Contact(contactId, name, email, phone))
        }
    }
    return@withContext results
}

Check out the full documentation here.

Best Practices for Developers

To provide the best user experience and maintain high security standards, we recommend the following:

Data Minimization: Only request the specific data fields (e.g., email) your app needs.

Immediate Persistence: Persist selected data immediately, as the Session URI access is temporary.
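The "Immediate Persistence" recommendation can be sketched in plain Kotlin. The StoredContact model and copyForStorage function below are hypothetical, not part of the Contact Picker API; Contact is the data class from the example above, redeclared here so the sketch is self-contained. The idea is to copy fields out of the temporary session results into app-owned storage right away rather than retaining the Session URI.

```kotlin
// Parsed picker result (same shape as the article's Contact class).
data class Contact(val id: String, val name: String, val email: String?, val phone: String?)

// Hypothetical app-owned model: keep only the minimal fields your feature needs.
data class StoredContact(val displayName: String, val phone: String?)

// Copy picker results into your own storage model immediately;
// the Session URI backing `picked` stops working once the session ends.
fun copyForStorage(picked: List<Contact>): Map<String, StoredContact> =
    picked.associate { c -> c.id to StoredContact(c.name, c.phone) }
```

Keying by the contact id (and dropping fields you did not request) also keeps the stored footprint aligned with the data-minimization guidance.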
Posted by Eser Erdem, Senior Engineering Manager, Android Automotive At Google we’re deeply committed to the automotive industry--not just as a technology provider, but as a partner in the industry's transformation. We believe that car makers and users should have choice and flexibility, and that open platforms are the best enablers. For over a decade, we have provided Android Automotive OS (AAOS) as an open platform for infotainment, enabling rich innovation and differentiation in the in-vehicle digital experience. However, as vehicles modernize, car makers face new hurdles: fragmented software across compute components, poor portability between architectures, and a lack of granular update capabilities. To address these problems, we are expanding AAOS beyond infotainment with Android Automotive OS for Software Defined Vehicles (AAOS SDV)--an open platform featuring a modular structure, a topology-agnostic communication layer, and the support for granular updates. The transition toward SDVs is an incredible industry transformation, and we are eager to contribute to the broader ecosystem making it happen. Later this year, AAOS SDV will be available in the Android Open Source Project (AOSP) for uses beyond infotainment. By bringing our SDV platform into the open-source domain, we empower the industry to develop or enhance features that lower costs, accelerate time to market, and provide significant advantages across the automotive landscape. A Foundation for the Software-Defined Vehicle AAOS SDV is engineered to address the core challenges of modern vehicle development. This new AAOS expansion provides a compact, performant and scalable software foundation based on a headless Android native stack, extending much deeper into the vehicle architecture to power software components throughout the vehicle such as the seat actuator, instrument cluster, climate control, lighting, cameras, mirrors, telemetry, and more. 
AAOS SDV’s core is a lightweight Android-based operating system incorporating low-level automotive specific frameworks for communications, diagnostics, software updates, and more. This enables AAOS SDV to power many different vehicle controllers, tackling Core Compute, Body Controls, and Cluster domains. In addition, the AAOS SDV platform includes a new framework, Display Safety, for implementing instrument cluster applications including audible chimes, regulatory camera, and sophisticated graphics that blend seamlessly with AAOS IVI content. Display Safety includes a safety design toolchain and a reference safety monitor, allowing OEMs to meet functional safety requirements leveraging the diverse platform safety mechanisms of Automotive SoCs. Flexible Deployment for AAOS SDV Engineered for flexibility, the AAOS SDV framework can utilize hypervisor-backed virtualization with virtio support to separate software domains, or it can be deployed on bare metal for optimal low-latency performance. Transforming the Developer Experience AAOS SDV is designed to power modern vehicles, but it was also designed to change how modern vehicle software is developed, tested and delivered with the goals to reduce development time and cost while increasing innovation and agility. With its optimized development workflows, our open-source SDV platform provides a wide range of benefits across the automotive industry: Accelerated Time-to-Market: AAOS SDV components can accelerate development with production ready software for various components that can be further modified. Standard Signal Catalog: A new standard signal catalog to bring OEMs and automotive suppliers onto the same page eliminates redundant engineering efforts and significantly reduces platform development costs. 
Optimized for virtual cloud development: AAOS SDV was designed ground-up to support virtual cloud development - enabling partners to design, test and validate components in the car well ahead of hardware availability. AAOS SDV already runs on Android Virtual Device (Cuttlefish), and works well with existing Google Cloud integrations such as Google Cloud Horizon, enabling a digital twin solution at scale. A Service-Oriented Architecture: Vehicle functions are developed as topology-agnostic services which are reusable across different architectures. The platform treats the vehicle as a dynamic, connected system. This allows for granular, service-level updates with built-in dependency handling, enabling you to deploy new features over-the-air and create continuous improvement loops. Future-Ready for new services: The platform is designed to simplify the development of telemetry, AI training feedback loops, accelerating the deployment of advanced features for both enterprise fleets and consumer vehicles. Production Ready: Partnering with Renault We are proud to highlight our deep partnership with Renault to underscore the production readiness of the AAOS SDV platform. Renault is currently leveraging the Android Automotive OS SDV platform for its upcoming Renault Trafic e-Tech, “[...] production set to begin in late 2026”. The Renault Trafic e-Tech validates the platform's ability to accelerate development and enable a new generation of software-defined commercial vehicles. Scaling Ready: Partnering with Qualcomm Qualcomm is scaling the Android Automotive OS SDV platform through our strategic partnership. At CES 2026, Qualcomm introduced Snapdragon vSoC on Google Cloud and announced a scaling collaboration to deliver a turnkey, pre-integrated AAOS SDV stack on Snapdragon Digital Chassis platforms. Building an Open AAOS Ecosystem The power of AAOS comes from its vibrant ecosystem. 
To prepare for the open source release later this year, we are proactively working with leading industry carmakers, suppliers, silicon platforms, and software vendors to ensure that the AAOS SDV platform is well supported and robustly integrated within the automotive ecosystem. We look forward to sharing more updates with our partners in the months ahead.
Posted by Matthew Forsythe, Director Product Management, Android App Safety

Android proves you don't have to choose between an open ecosystem and a secure one. Since announcing updated verification requirements, we've worked with the community to ensure these protections are robust yet respectful of platform freedom. We've heard from power users that they want to take educated risks to install software from unverified developers. Today, we're sharing details on a new advanced flow that provides this option.

Advanced flow safeguards against coercion

Android is built on choice. That is why we’ve developed the advanced flow – an approach that allows power users to maintain the ability to sideload apps from unverified developers. This flow is a one-time process for power users – but it was designed carefully to prevent those in the midst of a scam attempt from being coerced by high-pressure tactics into installing malicious software. In these scenarios, scammers exploit fear – using threats of financial ruin, legal trouble, or harm to a loved one – to create a sense of extreme urgency. They stay on the phone with victims, coaching them to bypass security warnings and disable security settings before the victim has a chance to think or seek help. According to a 2025 report from the Global Anti-Scam Alliance (GASA), 57% of surveyed adults experienced a scam in the past year, resulting in a global consumer loss of $442 billion. Because the consequences of these scams that use sophisticated social engineering tactics are so severe, we have carefully engineered the advanced flow to provide the critical time and space needed to break the cycle of coercion.

How the advanced flow works for users

Enable developer mode in system settings: This deliberate step prevents accidental triggers or "one-tap" bypasses often used in high-pressure scams.

Confirm you aren't being coached: There is a quick check to make sure that no one is talking you into turning off your security.
While power users know how to vet apps, scammers often pressure victims into disabling protections. Restart your phone and reauthenticate: This cuts off any remote access or active phone calls a scammer might be using to watch what you’re doing. Come back after the protective waiting period and verify: There is a one-time, one-day wait and then you can confirm that this is really you who’s making this change with our biometric authentication (fingerprint or face unlock) or device PIN. Scammers rely on manufactured urgency, so this breaks their spell and gives you time to think. Install apps: Once you confirm you understand the risks, you’re all set to install apps from unverified developers, with the option of enabling for 7 days or indefinitely. For safety, you’ll still see a warning that the app is from an unverified developer, but you can just tap “Install Anyway.” A secure Android for every developer We know a "one size fits all" approach doesn't work for our diverse ecosystem. We want to ensure that identity verification isn't a barrier to entry, so we’re providing different paths to fit your specific needs. In addition to the advanced flow we’re building free, limited distribution accounts for students and hobbyists. This allows you to share apps with a small group (up to 20 devices) without needing to provide a government-issued ID or pay a registration fee. This ensures Android remains an open platform for learning and experimentation while maintaining robust protections for the broader community. Limited distribution accounts and advanced flow for users will be available in August before the new developer verification requirements take effect. Visit our website for more details. We look forward to sharing more in the coming days and weeks.
Posted by Ivy Knight, Senior Design Advocate, Android

We're thrilled to announce major updates to our design resources, giving you the comprehensive guidance you need to create polished, adaptive Android apps across all form factors! We now have Desktop Experience guidance and a refreshed Android Design Gallery.

New Desktop Experience Design Guidance

Your users are engaging with Android apps on more diverse devices than ever before—from phones and foldables to laptops and external monitors. A "desktop experience" occurs anytime your app is in a desktop-like mode, typically involving a non-touch input device like a keyboard or mouse, or another display such as a monitor (read more in the connected display announcement). This means designing for larger screens and accommodating additional input states. These new design experiences are meant to maximize productivity for your users through higher information density and multitasking capabilities. Dive into the desktop experience guidance to help optimize your app with desktop design principles, input interaction guidance, and system UI considerations. The new guidance includes foundational guides where you can learn the design principles that make desktop experiences unique, such as how multitasking is at the core of desktop experiences. When your app is in a desktop experience, keep crucial interaction details in mind, such as how to best design around unique input interactions, like choosing cursors from the system-provided cursors. For specialized actions not covered by system icons, consider creating a custom cursor icon, while ensuring it remains easy for users to find on the page. A desktop experience brings more multitasking features, like windowing, so expect your app to take on a variety of dimensions with a header bar. Desktops have much larger screens than mobile, and users typically interact using a mouse, which has finer precision than a finger on a touch screen.
This means you can present a UI with higher information density so your users can be more productive! Want to get started quickly? Check out the walkthrough to go from mobile to desktop and design along with the updated Adaptive Design lab. For more on criteria that makes a differentiated quality app, read the newly updated adaptive app quality guidelines and adaptive developer guidance. Introducing the Android Design Gallery Looking for inspiration? We've launched the Android Design Gallery! This new resource is a living catalog of inspirational examples across multiple verticals, form factors, and UX patterns. We'll be continually adding new inspirational examples, so check back often to see the latest and greatest in Android design.
Posted by Daniel Santiago Rivera, Software Engineer

The first alpha of Room 3.0 has been released! Room 3.0 is a major breaking version of the library that focuses on Kotlin Multiplatform (KMP) and adds support for JavaScript and WebAssembly (WASM) on top of the existing Android, iOS and JVM desktop support. In this blog we outline the breaking changes, the reasoning behind Room 3.0, and the various things you can do to migrate from Room 2.x.

Breaking changes

Room 3.0 includes the following breaking API changes:

Dropping SupportSQLite APIs: Room 3.0 is fully backed by the androidx.sqlite driver APIs. The SQLiteDriver APIs are KMP-compatible, and removing Room’s dependency on Android's API simplifies the API surface for Android since it avoids having two possible backends.

No more Java code generation: Room 3.0 exclusively generates Kotlin code. This aligns with the evolving Kotlin-first paradigm but also simplifies the codebase and development process, enabling faster iterations.

Focus on KSP: We are also dropping support for Java Annotation Processing (AP) and KAPT. Room 3.0 is solely a KSP (Kotlin Symbol Processing) processor, allowing for better processing of Kotlin codebases without being limited by the Java language.

Coroutines first: Room 3.0 embraces Kotlin coroutines, making its APIs coroutine-first. Coroutines are the KMP-compatible asynchronous framework, and making Room asynchronous by nature is a critical requirement for supporting web platforms.

A new package

To prevent compatibility issues with existing Room 2.x implementations and for libraries with transitive dependencies on Room (for example, WorkManager), Room 3.0 resides in a new package, which means it also has a new maven group and artifact ids. For example, androidx.room:room-runtime has become androidx.room3:room3-runtime, and classes such as androidx.room.RoomDatabase will now be located at androidx.room3.RoomDatabase.
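As an illustration of the coordinate and package change, a migration might look like the following in a Gradle Kotlin DSL build file. The version string is a placeholder assumption, not an official release coordinate; check the Room release notes for the actual alpha version.

```kotlin
// build.gradle.kts (sketch): new Maven group and artifact ids for Room 3.
dependencies {
    // Before (Room 2.x):
    // implementation("androidx.room:room-runtime:2.8.0")

    // After (Room 3.0) — version shown is a placeholder:
    implementation("androidx.room3:room3-runtime:3.0.0-alpha01")
    ksp("androidx.room3:room3-compiler:3.0.0-alpha01")
}

// In code, the package prefix changes accordingly:
// Before: import androidx.room.RoomDatabase
// After:  import androidx.room3.RoomDatabase
```

Because the API shapes largely carry over, most of this migration is a mechanical update of imports and build coordinates.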
Kotlin and Coroutines First

With no more Java code generation, Room 3.0 also requires KSP and the Kotlin compiler, even if the codebase interacting with Room is in Java. It is recommended to have a multi-module project where Room usage is concentrated and the Kotlin Gradle Plugin and KSP can be applied without affecting the rest of the codebase. Room 3.0 also requires coroutines: DAO functions have to be suspending unless they return a reactive type, such as a Flow. Room 3.0 disallows blocking DAO functions. See the Coroutines on Android documentation on getting started integrating coroutines into your application.

Migration to SQLiteDriver APIs

With the shift away from SupportSQLite, apps will need to migrate to the SQLiteDriver APIs. This migration is essential to leveraging the full benefits of Room 3.0, including allowing the use of the bundled SQLite library via the BundledSQLiteDriver. You can start migrating to the driver APIs today with Room 2.7.0+. We strongly encourage you to avoid any further usage of SupportSQLite. If you migrate your Room integrations to the SQLiteDriver APIs, then the transition to Room 3.0 is easier, since the package change mostly involves updating symbol references (imports) and might require minimal changes to call sites. For a brief overview of the SQLiteDriver APIs, check out the SQLiteDriver APIs documentation. For more details on how to migrate Room to use SQLiteDriver APIs, check out the official documentation to migrate from SupportSQLite.

Room SupportSQLite wrapper

We understand completely removing SupportSQLite might not be immediately feasible for all projects. To ease this transition, Room 2.8.0, the latest version of the Room 2.x series, introduced a new artifact called androidx.room:room-sqlite-wrapper.
This artifact offers a compatibility API that allows you to convert a RoomDatabase into a SupportSQLiteDatabase, even if the SupportSQLite APIs in the database have been disabled due to a SQLiteDriver being installed. This provides a temporary bridge for developers who need more time to fully migrate their codebase. This artifact continues to exist in Room 3.0 as androidx.room3:room3-sqlite-wrapper to enable the migration to Room 3.0 while still supporting critical SupportSQLite usage. For example, invocations of roomDatabase.openHelper.writableDatabase can be replaced by roomDatabase.getSupportWrapper(), and a wrapper will be provided even if setDriver() is called on Room’s builder. For more details check out the room-sqlite-wrapper documentation.

Room and SQLite Web Support

Support for the Kotlin Multiplatform targets JS and WasmJS brings some of the most significant API changes. Specifically, many APIs in Room 3.0 are suspend functions, since proper support for web storage is asynchronous. The SQLiteDriver APIs have also been updated to support the Web, and a new asynchronous web driver is available in androidx.sqlite:sqlite-web. It is a Web Worker based driver that enables persisting the database in the Origin private file system (OPFS). For more details on how to set up Room for the Web, check out the Room 3.0 release notes.

Custom DAO Return Types

Room 3.0 introduces the ability to add custom integrations to Room, similar to RxJava and Paging. Through a new annotation API called @DaoReturnTypeConverter, you can create your own integration such that Room’s generated code becomes accessible at runtime. This enables @Dao functions to have custom return types without having to wait for the Room team to add support. Existing integrations have been migrated to use this functionality, and those who rely on them will now need to add the converters to the @Database or @Dao definitions.
For example, the Paging converter will be located in the androidx.room3:room3-paging artifact and is called PagingSourceDaoReturnTypeConverter, while for LiveData the converter is in androidx.room3:room3-livedata and is called LiveDataReturnTypeConverter. For more details check out the DAO Return Type Converters section in the Room 3.0 release notes.

Maintenance mode of Room 2.x

Since the development of Room will be focused on Room 3, the current Room 2.x version enters maintenance mode. This means that no major features will be developed, but patch releases (2.8.1, 2.8.2, etc.) will still occur with bug fixes and dependency updates. The team is committed to this work until Room 3 becomes stable.

Final thoughts

We are incredibly excited about the potential of Room 3.0 and the opportunities it unlocks for the Kotlin ecosystem. Stay tuned for more updates as we continue this journey!
Posted by Ajesh R Pai, Developer Relations Engineer & Ben Trengrove, Developer Relations Engineer

TikTok is a global short-video platform known for its massive user base and innovative features. The team is constantly releasing updates, experiments, and new features for their users. Faced with the challenge of maintaining velocity while managing technical debt, the TikTok Android team turned to Jetpack Compose. The team wanted to enable faster, higher-quality iteration on product requirements. By leveraging Compose, the team sought to improve engineering efficiency by writing less code and reducing cognitive load, while also achieving better performance and stability.

Streamlining complex UI to accelerate developer productivity

TikTok pages are often more complex than they appear, containing numerous layered conditional requirements. This complexity often resulted in difficult-to-maintain, sub-optimally structured View hierarchies and excessive View nesting, which caused performance degradation due to an increased number of measure passes. Compose offered a direct solution to this structural problem. Furthermore, Compose’s measurement strategy helps reduce double taxation, making measure performance easier to optimize. To improve developer productivity, TikTok’s central Design System team provides a component library for teams working on different app features. The team observed that development in Compose is simple: leveraging small composables is highly effective, while incorporating large UI blocks with conditional logic is both straightforward and low-overhead.

Building a path forward through strategic migration

By strategically adopting Jetpack Compose, TikTok was able to stay on top of technical debt, while also continuing to focus on creating great experiences for their users.
The ability of Compose to handle conditional logic cleanly and streamline composition allowed the team to achieve up to a 78% reduction in page loading time on new or fully rewritten pages. This improvement was 20–30% in smaller cases, and 70–80% for full rewrites and new features. They were also able to reduce their code size by 58% compared to the same feature built in Views. The TikTok team’s overall strategy was to incrementally migrate specific user journeys. This gave them an opportunity to migrate, confirm measurable benefits, then scale to more screens. They started by using Compose to simplify the overall structure of the QR code feature and saw the improvements. The team later expanded the migration to the Login and Sign-up experiences. The team shared some additional learnings: While checking performance during migration, the TikTok team found that using many small ComposeViews to replace elements inside a single ViewHolder caused composition overhead. They achieved better results by expanding the migration to use one single ComposeView for the entire ViewHolder. When migrating a Fragment inside a ViewPager, which had custom height logic and conditional logic to hide and show UI based on experiments, performance wasn’t impacted; in this case, migrating the ViewPager itself to a composable performed better than migrating only the Fragment. Jun Shen really likes that Compose "reduces the amount of code required for feature development, improves testability, and accelerates delivery". The team plans to steadily increase Compose adoption, making it their preferred framework in the long term. Jetpack Compose proved to be a powerful solution for improving both their developer experience and production metrics at scale. Get Started with Jetpack Compose Learn more about how Jetpack Compose can help your team.
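The ViewHolder learning above can be sketched roughly as follows: a single ComposeView hosts the entire row, rather than several small ComposeViews replacing individual Views inside the holder. FeedItem and FeedItemContent are hypothetical names for illustration, not from TikTok's codebase:

```kotlin
import android.view.ViewGroup
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.platform.ComposeView
import androidx.recyclerview.widget.RecyclerView

// Hypothetical item model and row composable for illustration.
data class FeedItem(val title: String)

@Composable
fun FeedItemContent(item: FeedItem) {
    Text(item.title) // in practice, the full row UI lives here
}

// One ComposeView per ViewHolder: the whole row is composed at once,
// avoiding the per-ComposeView composition overhead of many small hosts.
class FeedItemViewHolder(parent: ViewGroup) :
    RecyclerView.ViewHolder(ComposeView(parent.context)) {

    fun bind(item: FeedItem) {
        (itemView as ComposeView).setContent { FeedItemContent(item) }
    }
}
```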
Posted by Maru Ahues Bouza, PM Director, Games on Google Play Last September, we shared our vision for the future of Google Play Games grounded in a core belief: the best way to drive your game’s success is to deliver a world-class player experience. We launched the Google Play Games Level Up program to recognize and reward great gaming experiences, while providing you with a powerful toolkit and new promotional opportunities to grow your games. The momentum since our announcement has been incredibly positive, with more than 600 million gamers now using Play Games Services every month. Developers are also finding success, with one-third of all game installs on the Play Store now coming from editorially-driven organic discovery. In fact, in 2025, Level Up features have driven over 2.5 billion incremental acquisitions for featured games, in addition to an average uplift of 25% in installs during the featuring windows. Today, we're inviting you to start testing Play Games Sidekick to keep your players in the action, sharing new Play Console updates to optimize your reach, and helping you prepare for our upcoming program milestones. Boost retention and immersion with Play Games Sidekick Play Games Sidekick is a helpful in-game overlay that gives players instant access to relevant gaming information—like rewards, offers, achievements, and quest progress— keeping them immersed while driving higher engagement for developers. It serves as a seamless bridge to the highly visible "You" tab, connecting your game to 160 million monthly active users already engaging there and doubles as an active gaming companion that enhances the player experience with helpful, AI-generated Game Tips. Deep Rock Galactic: Survivor keeps players in the action with Play Games Sidekick Today, Sidekick officially debuts in over 90 games, with the experience expanding to all Level Up titles later this year. But you don’t need to wait for the broader rollout to get your game ready. 
You can now enable Sidekick through Play Console to preview and test how your players will interact with features like Achievements, Streaks, Play Points Coupons, and Game Tips. Upon completing your testing, be sure to push Sidekick to production to ensure your game meets the Level Up user experience guidelines. Enable Play Games Sidekick in Play Console to begin testing Optimize reach and operations with new Play Console updates We are also rolling out two new Play Console updates to help you optimize your reach and streamline operations: Pre-reg device breakdowns: To aid launch decisions, you can now analyze the device distribution of your pre-registered audience by key device attributes including Android version, RAM, and SoC. This enables you to optimize game performance, minimum specs, and marketing spend for the players already waiting for your game. Identify launch-day risks and optimize performance for your players with new pre-registration device breakdowns Real-time feedback: With Level Up+, our tier for high-performing games, qualifying titles can unlock promotional content featuring and tools like deep links and audience targeting. While submissions must meet Play’s quality guidelines, you no longer have to wait 24 hours to learn about issues. You can now get immediate feedback on quality whenever possible. Your 2026 checklist: Securing your Level Up benefits Today, all games on Google Play qualify for Google Play Games Level Up. However, in order to maintain access to Level Up benefits like Play Points offers, expanded APK size limits, Play Store collections and campaigns consideration, or access to high-visibility surfaces like the You tab and Sidekick, you’ll need to ensure your game meets user experience guidelines by their upcoming milestones: By July 2026: Integrate Play Games Sidekick to offer a quick and easy entry point to access rewards, offers, and achievements through an in-game overlay.
Implement achievements with Play Games Services to support authentication with the modern Gamer Profile and keep players engaged across the lifespan of your game. By November 2026: Implement cloud save to enable progress sync across devices. Last week, we announced that we’re working on an expanded Level Up program that builds on our successful foundation to further improve gaming experiences. The update will introduce new requirements that will unlock additional benefits like lower service fees. Engaging with the program now ensures your work is strategically aligned with these future updates. We’ll share more details in the coming months. In the meantime, the path to your first program milestone begins today. By prioritizing these user experience guidelines now, you’re investing in the long-term value of your game and ensuring it’s built to thrive for every player. Head over to Play Console to start testing Sidekick and take the next step in your Level Up journey.
Posted by Aurash Mahbod, VP and GM, Games on Google Play Google Play is proud to be the home of over 200,000 games—many of which defined the mobile-first era. But as cross-platform becomes the standard for players, we are evolving our ecosystem to match the scale of your ambitions. In recent years, we focused on elevating Android gaming quality while significantly deepening our support for native PC titles. We know that maximizing your game’s reach across different platforms is complex. The Level Up program serves as your strategic roadmap, helping you prioritize optimizations that drive great experiences on Android. Building on this foundation, we’re doubling down on our investment to make Play the most accessible home for every category of play. We’re adding new tools for paid games and making the PC game discovery-to-purchase journey seamless. Keep reading to learn more about how we’re creating a bigger stage for your games. Scale your discovery across mobile and PC platforms Building a bigger stage starts with making your games easier to find—and easier to buy—no matter which device your players prefer. We’re expanding your reach by bringing cross-platform discovery directly to the mobile storefront. With the new PC section in the Games tab, your PC titles gain high visibility placement among our most active mobile players. The PC badge ensures your cross-platform investment is recognized. This creates more opportunities to acquire players on mobile and transition them seamlessly to your high-fidelity PC experience. PC in the Games tab and PC badging expands your game’s reach With ‘buy once play anywhere’ pricing, we’re making it easier to sell your games across different devices. If you choose to opt in your mobile game to Google Play Games on PC, you can now offer a single price that covers both mobile and PC versions. We’re rolling out this feature as an early access program (EAP) with select games including Brotato: Premium.
For PC-only games, players can now complete the full purchase journey on Google Play Games on PC with the same trusted security and privacy standards they expect from Google Play. 'Buy once play anywhere’ pricing to sell your games across devices Lower the purchase barrier with Game Trials To help you convert high-intent buyers with less friction, we’re introducing Game Trials, a feature that enables players to experience your game for a limited time before making a purchase on mobile. Accessible directly from your game’s store listing, Game Trials provides a fast-track for players to start exploring your world with a single tap. Game Trials are now in testing with select titles, and we’ll roll them out to more titles soon. To ensure this is low maintenance for you, Game Trials is added directly into your Android App Bundle. This enables you to offer a high quality trial without the burden of a separate codebase or a demo version of your app. Play ensures trials are secure and seamless. Game Trials are available once per user, and Play protects your game while the trial is active. When it ends, players can purchase your game and keep their progress. We’re also working on tools that will give you more control—such as specifying a custom time limit or an in-game event to conclude the trial. Game Trial for DREDGE to help convert high-intent buyers Diversify your revenue with a dedicated player community on Play Pass Play Pass is another way to diversify revenue and grow your player audience. It has been a strong launchpad for indie hits such as Isle of Arrows, Slay the Spire, and Dead Cells. With Play Pass, you can reach highly dedicated players seeking a more curated gaming experience, free of ads and in-app purchases. To help you deepen engagement, paid titles on Play Pass can now opt in to Google Play Games on PC — making it easy for players to find and play your games on a larger screen.
Later this year, you can nominate your game through a streamlined opt-in process directly in Play Console. Drive long term sales with Wishlists and Discounts Wishlists and Discounts are among the most effective ways to capture player intent and drive long term sales. To support players at every stage of their purchase journey, we’re integrating them directly into Play. Players can save titles to their wishlist and manage them from library settings. To keep your game top-of-mind, players will receive automated notifications for your latest discounts — starting with mobile and expanding soon to PC games. Wishlist and discount notifications drive long term sales, rolling out today How leading studios are finding a new path to success on Play We’re thrilled to welcome Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs to Play [1]. It marks an exciting expansion of our catalog and a step forward in our mission to build a bigger gaming ecosystem for all developers. This growth is fueled by our developer community, whose feedback continues to shape our roadmap and help us better support your success. Sledding Game, 9 Kings, Potion Craft, Moonlight Peaks, and Low Budget Repairs are coming to Play. That mission brings us to GDC and the Independent Games Festival (IGF) Awards [2], where the next generation of games awaits! This year, we’re inviting you to come along for the ride as we go backstage to chat with the finalists and winners, sharing the moments of triumph and the creative stories behind their development. Not joining us at GDC? You can take the next step in your journey to launch your game on Google Play today. 1. Sledding Game, 9 Kings, Potion Craft, and Moonlight Peaks are coming to Google Play in 2026. Low Budget Repairs is scheduled for release in 2027. [Back] 2. Independent Games Festival (IGF) Awards is hosted by Game Developers Conference (GDC) and requires a valid GDC pass for entry. [Back]
Posted by Yabin Cui, Software Engineer We are the Android LLVM toolchain team. One of our top priorities is to improve Android performance through optimization techniques in the LLVM ecosystem. We are constantly searching for ways to make Android faster, smoother, and more efficient. While much of our optimization work happens in userspace, the kernel remains the heart of the system. Today, we’re excited to share how we are bringing Automatic Feedback-Directed Optimization (AutoFDO) to the Android kernel to deliver significant performance wins for users. What is AutoFDO? During a standard software build, the compiler makes thousands of small decisions, such as whether to inline a function and which branch of a conditional is likely to be taken, based on static code hints. While these heuristics are useful, they don't always accurately predict code execution during real-world phone usage. AutoFDO changes this by using real-world execution patterns to guide the compiler. These patterns represent the most common instruction execution paths the code takes during actual use, captured by recording the CPU's branching history. While this data can be collected from fleet devices, for the kernel we synthesize it in a lab environment using representative workloads, such as running the top 100 most popular apps. We use a sampling profiler to capture this data, identifying which parts of the code are 'hot' (frequently used) and which are 'cold'. When we rebuild the kernel with these profiles, the compiler can make much smarter optimization decisions tailored to actual Android workloads. To understand the impact of this optimization, consider these key facts: On Android, the kernel accounts for about 40% of CPU time. We are already using AutoFDO to optimize native executables and libraries in the userspace, achieving about 4% cold app launch improvement and a 1% boot time reduction.
Real-World Performance Wins We have seen impressive improvements across key Android metrics by leveraging profiles from controlled lab environments. These profiles were collected using app crawling and launching, and measured on Pixel devices across the 6.1, 6.6, and 6.12 kernels. The most noticeable improvements are listed below. Details on the AutoFDO profiles for these kernel versions can be found in the respective Android kernel repositories for android16-6.12 and android15-6.6 kernels. These aren't just theoretical numbers. They translate to a snappier interface, faster app switching, extended battery life, and an overall more responsive device for the end user. How It Works: The Pipeline Our deployment strategy involves a sophisticated pipeline to ensure profiles stay relevant and performance remains stable. Step 1: Profile Collection While we rely on our internal test fleet to profile userspace binaries, we shifted to a controlled lab environment for the Generic Kernel Image (GKI). Decoupling profiling from the device release cycle allows for flexible, immediate updates independent of deployed kernel versions. Crucially, tests confirm that this lab-based data delivers performance gains comparable to those from real-world fleets. Tools & Environment: We flash test devices with the latest kernel image and use simpleperf to capture instruction execution streams. This process relies on hardware capabilities to record branching history, specifically utilizing ARM Embedded Trace Extension (ETE) and ARM Trace Buffer Extension (TRBE) on Pixel devices. Workloads: We construct a representative workload using the top 100 most popular apps from the Android App Compatibility Test Suite (C-Suite). 
To capture the most accurate data, we focus on: App Launching: Optimizing for the most visible user delays AI-Driven App Crawling: Simulating continuous, evolving user interactions System-Wide Monitoring: Capturing not only foreground app activities, but also critical background workloads and inter-process communications Validation: This synthesized workload shows an 85% similarity to execution patterns collected from our internal fleet. Targeted Data: By repeating these tests sufficiently, we capture high-fidelity execution patterns that accurately represent real-world user interaction with the most popular applications. Furthermore, this extensible framework allows us to seamlessly integrate additional workloads and benchmarks to broaden our coverage. Step 2: Profile Processing We post-process the raw trace data to ensure it is clean, effective, and ready for the compiler. Aggregation: We consolidate data from multiple test runs and devices into a single system view. Conversion: We convert raw traces into the AutoFDO profile format, filtering out unwanted symbols as needed. Profile Trimming: We trim profiles to remove data for "cold" functions, allowing them to use standard optimization. This prevents regressions in rarely used code and avoids unnecessary increases in binary size. Step 3: Profile Testing Before deployment, profiles undergo rigorous verification to ensure they deliver consistent performance wins without stability risks. Profile & Binary Analysis: We strictly compare the new profile's content (including hot functions, sample counts, and profile size) against previous versions. We also use the profile to build a new kernel image, analyzing binaries to ensure that changes to the text section are consistent with expectations. Performance Verification: We run targeted benchmarks on the new kernel image. This confirms that it maintains the performance improvements established by previous baselines.
Continuous Updates Code naturally "drifts" over time, so a static profile would eventually lose its effectiveness. To maintain peak performance, we run the pipeline continuously to drive regular updates: Regular Refresh: We refresh profiles in Android kernel LTS branches ahead of each GKI release, ensuring every build includes the latest profile data. Future Expansion: We are currently delivering these updates to the android16-6.12 and android15-6.6 branches and will expand support to newer GKI versions, such as the upcoming android17-6.18. Ensuring Stability A common question with profile-guided optimization is whether it introduces stability risks. Because AutoFDO primarily influences compiler heuristics, such as function inlining and code layout, rather than altering the source code's logic, it preserves the functional integrity of the kernel. This technology has already been proven at scale, serving as a standard optimization for Android platform libraries, ChromeOS, and Google’s own server infrastructure for years. To further guarantee consistent behavior, we apply a "conservative by default" strategy. Functions not captured in our high-fidelity profiles are optimized using standard compiler methods. This ensures that the "cold" or rarely executed parts of the kernel behave exactly as they would in a standard build, preventing performance regressions or unexpected behaviors in corner cases. Looking Ahead We are currently deploying AutoFDO across the android16-6.12 and android15-6.6 branches. Beyond this initial rollout, we see several promising avenues to further enhance the technology: Expanded Reach: We look forward to deploying AutoFDO profiles to newer GKI kernel versions and additional build targets beyond the current aarch64 support. GKI Module Optimization: Currently, our optimization is focused on the main kernel binary (vmlinux). Expanding AutoFDO to GKI modules could bring performance benefits to a larger portion of the kernel subsystem. 
Vendor Module Support: We are also interested in supporting AutoFDO for vendor modules built using the Driver Development Kit (DDK). With support already available in our build system (Kleaf) and profiling tools (simpleperf), this allows vendors to apply these same optimization techniques to their specific hardware drivers. Broader Profile Coverage: There is potential to collect profiles from a wider range of Critical User Journeys (CUJs) to optimize them. By bringing AutoFDO to the Android kernel, we’re ensuring that the very foundation of the OS is optimized for the way you use your device every day.
Posted by Mayuri Khinvasara Khabya, Developer Relations Engineer (LinkedIn and X) In the dynamic world of social media, user attention is won or lost quickly. Meta apps (Facebook and Instagram) are among the world's largest social platforms and serve billions of users globally. For Meta, delivering videos seamlessly isn't just a feature, it's the core of their user experience. Short-form videos, particularly Facebook Newsfeed and Instagram Reels, have become a primary driver of engagement. They enable creative expression and rapid content consumption, connecting and entertaining people around the world. This blog post takes you through the journey of how Meta transformed video playback for billions by delivering true instant playback. The latency gap in short form videos Short-form videos lead to highly fast-paced interactions as users quickly scroll through their feeds. Delivering a seamless transition between videos in an ever-changing feed introduces unique hurdles for instantaneous playback. Hence, we need solutions that go beyond traditional disk caching and standard reactive playback strategies. The path forward with Media3 PreloadManager To address the shifts in consumption habits from the rise in short-form content and the limitations of traditional long-form playback architecture, Jetpack Media3 introduced PreloadManager. This component allows developers to move beyond disk caching, offering granular control and customization to keep media ready in memory before the user hits play. Read this blog series to understand technical details about media playback with PreloadManager. How Meta achieved true instant playback Existing Complexities Previously, Meta used a combination of warmup (to get players ready) and prefetch (to cache content on disk) for video delivery. While these methods helped improve network efficiency, they introduced significant challenges.
Warmup required instantiating multiple player instances sequentially, which consumed significant memory and limited preloading to only a few videos. This high resource demand meant that a more scalable, robust solution was needed to deliver the instant playback expected on modern, fast-scrolling social feeds. Integrating Media3 PreloadManager To achieve truly instant playback, Meta's Media Foundation Client team integrated the Jetpack Media3 PreloadManager into Facebook and Instagram. They chose the DefaultPreloadManager to unify their preloading and playback systems. This integration required refactoring Meta's existing architecture to enable efficient resource sharing between the PreloadManager and ExoPlayer instances. This strategic shift provided a key architectural advantage: the ability to parallelize preloading tasks and manage many videos using a single player instance. This dramatically increased preloading capacity while eliminating the high memory complexities of their previous approach. Optimization and Performance Tuning The team then performed extensive testing and iterations to optimize performance across Meta's diverse global device ecosystem. Initial aggressive preloading sometimes caused issues, including increased memory usage and scroll performance slowdowns. To solve this, they fine-tuned the implementation by using careful memory measurements, considering device fragmentation, and tailoring the system to specific UI patterns. Fine tuning implementation to specific UI patterns Meta applied different preloading strategies and tailored the behavior to match the specific UI patterns of each app: Facebook Newsfeed: The UI prioritizes the video currently coming into view. The manager preloads only the current video to ensure it starts the moment the user pauses their scroll. This "current-only" focus minimizes data and memory footprints in an environment where users may see many static posts between videos.
While the system is presently designed to preload just the video in view, it can be adjusted to also preload upcoming (future) videos. Instagram Reels: This is a pure video environment where users swipe vertically. For this UI, the team implemented an "adjacent preload" strategy. The PreloadManager keeps the videos immediately before and after the current Reel ready in memory. This bi-directional approach ensures that whether a user swipes up or down, the transition remains instant and smooth. The result was a dramatic improvement in the Quality of Experience (QoE) including improvements in Playback Start and Time to First Frame for the user. Scaling for a diverse global device ecosystem Scaling a high-performance video stack across billions of devices requires more than just aggressive preloading; it requires intelligence. Meta faced initial challenges with memory pressure and scroll lag, particularly on mid-to-low-end hardware. To solve this, they built a Device Stress Detection system around the Media3 implementation. The apps now monitor I/O and CPU signals in real-time. If a device is under heavy load, preloading is paused to prioritize UI responsiveness. This device-aware optimization ensures that the benefit of instant playback doesn't come at the cost of system stability, allowing even users on older hardware to experience a smoother, uninterrupted feed. Architectural wins and code health Beyond the user-facing metrics, the migration to Media3 PreloadManager offered long-term architectural benefits. While the integration and tuning process needed multiple iterations to balance performance, the resulting codebase is more maintainable. The team found that the PreloadManager API integrated cleanly with the existing Media3 ecosystem, allowing for better resource sharing. For Meta, the adoption of Media3 PreloadManager was a strategic investment in the future of video consumption.
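An "adjacent preload" strategy like the one described can be sketched as follows. This is an illustrative example, not Meta's actual code: the helper class and ranking scheme are assumptions, and the exact way you construct a DefaultPreloadManager depends on your Media3 version.

```kotlin
import androidx.media3.common.MediaItem
import androidx.media3.exoplayer.source.preload.DefaultPreloadManager
import kotlin.math.abs

// Keeps the current Reel plus its immediate neighbors ready, so the
// transition is instant whether the user swipes up or down.
class AdjacentPreloader(private val preloadManager: DefaultPreloadManager) {

    fun onCurrentIndexChanged(items: List<MediaItem>, currentIndex: Int) {
        for (offset in -1..1) {
            val index = currentIndex + offset
            if (index in items.indices) {
                // Ranking data is the distance from the current index,
                // so the closest items are preloaded first.
                preloadManager.add(items[index], /* rankingData = */ abs(offset))
            }
        }
        // Tell the manager where playback is, then let it re-evaluate
        // which sources to preload and how far.
        preloadManager.setCurrentPlayingIndex(currentIndex)
        preloadManager.invalidate()
    }
}
```

In a production feed you would also remove items that scroll far out of range, so the manager's working set stays bounded.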
By adopting preloading and adding device-intelligent gates, they successfully increased total watch time on their apps and improved the overall engagement of their global community. Resulting impact on Instagram and Facebook The proactive architecture delivered immediate and measurable improvements across both platforms. Facebook experienced faster playback starts, decreased playback stall rates, and a reduction in bad sessions (like rebuffering, delayed start time, lower quality, etc.), which overall resulted in higher watch time. Instagram saw faster playback starts and an increase in total watch time. Eliminating join latency (the interval from the user's action to the first frame display) directly increased engagement metrics. Fewer interruptions from reduced buffering meant users watched more content, which was reflected in engagement metrics. Key engineering learnings at scale As media consumption habits evolve, the demand for instant experiences will continue to grow. Implementing proactive memory management and optimizing for scale and device diversity ensures your application can meet these expectations efficiently. Prioritize intelligent preloading Focus on delivering a reliable experience by minimizing stutters and loading times through preloading. Rather than simple disk caching, leveraging memory-level preloading ensures that content is ready the moment a user interacts with it. Align your implementation with UI patterns Customize preloading behavior to match your app's UI. For example, use a "current-only" focus for mixed feeds like Facebook to save memory, and an "adjacent preload" strategy for vertical environments like Instagram Reels. Leverage Media3 for long-term code health Integrating with Media3 APIs rather than a custom caching solution allows for better resource sharing between the player and the PreloadManager, enabling you to manage multiple videos with a single player instance.
This results in a future-proof codebase that is easier for engineering teams to maintain and optimize over time, while also benefiting from the latest feature updates. Implement device-aware optimizations Broaden your market reach by testing on various devices, including mid-to-low-end models. Use real-time signals like CPU, memory, and I/O to adapt features and resource usage dynamically. Learn More To get started and learn more: Explore the Media3 PreloadManager documentation. Read the blog series for advanced technical and implementation details. Part 1: Introducing Preloading with Media3 Part 2: A deep dive into Media3's PreloadManager Check out the sample app to see preloading in action. Now you know the secrets for instant playback. Go try them out!
Posted by Matthew McCullough, VP of Product Management, Android Developer We want to make it faster and easier for you to build high-quality Android apps, and one way we’re helping you be more productive is by putting AI at your fingertips. We know you want AI that truly understands the nuances of the Android platform, which is why we’ve been measuring how LLMs perform Android development tasks. Today we released the first version of Android Bench, our official leaderboard of LLMs for Android development. Our goal is to provide model creators with a benchmark to evaluate LLM capabilities for Android development. By establishing a clear, reliable baseline for what high quality Android development looks like, we’re helping model creators identify gaps and accelerate improvements, which empowers developers to work more efficiently with a wider range of helpful models to choose from for AI assistance, and ultimately will lead to higher quality apps across the Android ecosystem. Designed with real-world Android development tasks We created the benchmark by curating a task set against a range of common Android development areas. It is composed of real challenges of varying difficulty, sourced from public GitHub Android repositories. Scenarios include resolving breaking changes across Android releases, domain-specific tasks like networking on wearables, and migrating to the latest version of Jetpack Compose, to name a few. Each evaluation attempts to have an LLM fix the issue reported in the task, which we then verify using unit or instrumentation tests. This model-agnostic approach allows us to measure a model’s ability to navigate complex codebases, understand dependencies, and solve the kind of problems you encounter every day. We validated this methodology with several LLM makers, including JetBrains. “Measuring AI’s impact on Android is a massive challenge, so it’s great to see a framework that’s this sound and realistic.
While we’re active in benchmarking ourselves, Android Bench is a unique and welcome addition. This methodology is exactly the kind of rigorous evaluation Android developers need right now.” - Kirill Smelov, Head of AI Integrations at JetBrains. The first Android Bench results For this initial release, we wanted to purely measure model performance and not focus on agentic or tool use. The models were able to successfully complete 16–72% of the tasks. This is a wide range that demonstrates some LLMs already have a strong baseline for Android knowledge, while others have more room for improvement. Regardless of where the models are now, we’re anticipating continued improvement as we encourage LLM makers to enhance their models for Android development. The LLM with the highest average score for this first release is Gemini 3.1 Pro, followed closely by Claude Opus 4.6. You can try all of the models we evaluated for AI assistance for your Android projects by using API keys in the latest stable version of Android Studio. Providing developers and LLM makers with transparency We value an open and transparent approach, so we made our methodology, dataset, and test harness publicly available on GitHub. One challenge for any public benchmark is the risk of data contamination, where models may have seen evaluation tasks during their training process. We have taken measures to ensure our results reflect genuine reasoning rather than memorization or guessing, including a thorough manual review of agent trajectories and the integration of a canary string to discourage training. Looking ahead, we will continue to evolve our methodology to preserve the integrity of the dataset, while also making improvements for future releases of the benchmark—for example, growing the quantity and complexity of tasks. We’re looking forward to how Android Bench can improve AI assistance long-term. Our vision is to close the gap between concept and quality code.
We're building the foundation for a future where no matter what you imagine, you can build it on Android.
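The verification loop described in this post (apply a model’s fix, run the task’s unit or instrumentation tests, and count the tasks that pass) reduces to a simple score. Here is a minimal Kotlin sketch; the data shapes are hypothetical illustrations, not part of the actual Android Bench harness:

```kotlin
// Hypothetical shape for one benchmark task attempt: the task id and
// whether the verifying tests passed after applying the model's fix.
data class TaskResult(val taskId: String, val testsPassed: Boolean)

// A model's score is the fraction of tasks whose verifying tests pass,
// which is how a completion range like 16% to 72% would be computed.
fun completionRate(results: List<TaskResult>): Double =
    if (results.isEmpty()) 0.0
    else results.count { it.testsPassed }.toDouble() / results.size
```

Because scoring depends only on test outcomes, the harness stays model-agnostic: any LLM that can produce a patch can be ranked the same way.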
Posted by Alice Yuan, Senior Developer Relations Engineer In recognition that excessive battery drain is top of mind for Android users, Google has been taking significant steps to help developers build more power-efficient apps. On March 1st, 2026, the Google Play Store began rolling out the wake lock technical quality treatments to improve battery drain. This treatment will roll out gradually to impacted apps over the following weeks. Apps that consistently exceed the "Excessive Partial Wake Lock" threshold in Android vitals may see tangible impacts on their store presence, including warnings on their store listing and exclusion from discovery surfaces such as recommendations. Users may see a warning on your store listing if your app exceeds the bad behavior threshold. This initiative elevates battery efficiency to a core vital metric alongside stability metrics like crashes and ANRs.

The "bad behavior threshold" is defined as holding a non-exempted partial wake lock for at least two hours on average while the screen is off, in more than 5% of user sessions over the past 28 days. A wake lock is exempted if it is a system-held wake lock that offers clear user benefits that cannot be further optimized, such as audio playback, location access, or user-initiated data transfer. You can view the full definition of excessive wake locks in our Android vitals documentation.

As part of our ongoing initiative to improve battery life across the Android ecosystem, we have analyzed thousands of apps and how they use partial wake locks. While wake locks are sometimes necessary, we often see apps holding them inefficiently or unnecessarily when more efficient solutions exist. This blog post goes over the most common scenarios where excessive wake locks occur and our recommendations for optimizing wake locks. We have already seen measurable success from partners like WHOOP, who leveraged these recommendations to optimize their background behavior.
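To make the threshold concrete, here is a minimal Kotlin sketch of how the two-hour / 5% rule composes. The data shapes are hypothetical and only illustrate the definition above; they are not the actual Android vitals implementation:

```kotlin
// Hypothetical per-session record: minutes of non-exempt partial wake lock
// time held while the screen was off during that session.
data class Session(val screenOffWakeLockMinutes: Long)

// An app crosses the bad behavior threshold when more than 5% of user
// sessions hold non-exempt partial wake locks for at least two hours.
fun exceedsBadBehaviorThreshold(sessions: List<Session>): Boolean {
    if (sessions.isEmpty()) return false
    val badSessions = sessions.count { it.screenOffWakeLockMinutes >= 120 }
    return badSessions.toDouble() / sessions.size > 0.05
}
```

With 10 bad sessions out of 100 the rate is 10% and the app would be flagged; at exactly 5% it would not, since the definition requires more than 5% of sessions.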
Using a foreground service vs partial wake locks

We’ve often seen developers struggle to distinguish two concepts used for background execution: foreground services and partial wake locks. A foreground service is a lifecycle API that signals to the system that an app is performing user-perceptible work and should not be killed to reclaim memory, but it does not automatically prevent the CPU from sleeping when the screen turns off. In contrast, a partial wake lock is a mechanism specifically designed to keep the CPU running even while the screen is off. While a foreground service is often necessary to continue a user-initiated action, manually acquiring a partial wake lock is only necessary in conjunction with that foreground service, and only for the duration of the CPU activity. In addition, you don't need to use a wake lock if you're already utilizing an API that keeps the device awake. Refer to the flow chart in Choose the right API to keep the device awake to ensure you have a strong understanding of which tool to use, so you avoid acquiring a wake lock in scenarios where it’s not necessary.

Third party libraries acquiring wake locks

It is common for an app to discover that it is flagged for excessive wake locks held by a third-party SDK or a system API acting on its behalf. To identify and resolve these wake locks, we recommend the following steps:

Check Android vitals: Find the exact name of the offending wake lock in the excessive partial wake locks dashboard. Cross-reference this name with the Identify wake locks created by other APIs guidance to see if it was created by a known system API or Jetpack library. If it is, you may need to optimize your usage of the API and can refer to the recommended guidance.

Capture a System Trace: If the wake lock cannot be easily identified, reproduce the wake lock issue locally using a system trace and inspect it with the Perfetto UI.
You can learn more about how to do this in the Debugging other types of excessive wake locks section of this blog post.

Evaluate Alternatives: If an inefficient third-party library is responsible and cannot be configured to respect battery life, consider raising the issue with the SDK's owners, finding an alternative SDK, or building the functionality in-house.

Common wake lock scenarios

Below is a breakdown of some of the specific use cases we have reviewed, along with the recommended path to optimize your wake lock implementation.

User-Initiated Upload or Download

Example use cases:
- Video streaming apps where the user triggers a download of a large file for offline access.
- Media backup apps where the user triggers uploading their recent photos via a notification prompt.

How to reduce wake locks: Do not acquire a manual wake lock. Instead, use the User-Initiated Data Transfer (UIDT) API. This is the designated path for long-running data transfer tasks initiated by the user, and it is exempted from excessive wake lock calculations.

One-Time or Periodic Background Syncs

Example use cases:
- An app performs periodic background syncs to fetch data for offline access.
- Pedometer apps that fetch step counts periodically.

How to reduce wake locks: Do not acquire a manual wake lock. Use WorkManager configured for one-time or periodic work. WorkManager respects system health by batching tasks and has a minimum periodic interval (15 minutes), which is generally sufficient for background updates. If you identify wake locks created by WorkManager or JobScheduler with high wake lock usage, it may be because you’ve misconfigured your worker so that it does not complete in certain scenarios. Consider analyzing the worker stop reasons, particularly if you’re seeing high occurrences of STOP_REASON_TIMEOUT.
workManager.getWorkInfoByIdFlow(syncWorker.id)
    .collect { workInfo ->
        if (workInfo != null) {
            val stopReason = workInfo.stopReason
            logStopReason(syncWorker.id, stopReason)
        }
    }

In addition to logging worker stop reasons, refer to our documentation on debugging your workers. Also, consider collecting and analyzing system traces to understand when wake locks are acquired and released. Finally, check out our case study with WHOOP, where they were able to discover an issue with the configuration of their workers and significantly reduce their wake lock impact.

Bluetooth Communication

Example use cases:
- A companion device app prompts the user to pair their external Bluetooth device.
- A companion device app listens for hardware events on an external device and surfaces a user-visible change in a notification.
- A companion device app's user initiates a file transfer between the mobile and Bluetooth device.
- A companion device app performs occasional firmware updates to an external device via Bluetooth.

How to reduce wake locks: Use companion device pairing to pair Bluetooth devices, so you avoid acquiring a manual wake lock during pairing. Consult the Communicate in the background guidance to understand how to do background Bluetooth communication. Using WorkManager is often sufficient if there is no user impact from a delayed communication. If a manual wake lock is deemed necessary, only hold the wake lock for the duration of the Bluetooth activity or the processing of the activity data.

Location Tracking

Example use cases:
- Fitness apps that cache location data for later upload, such as plotting running routes.
- Food delivery apps that pull location data at a high frequency to update delivery progress in a notification or widget UI.

How to reduce wake locks: Consult our guidance to Optimize location usage. Consider implementing timeouts, leveraging location request batching, or utilizing passive location updates to ensure battery efficiency.
When requesting location updates using the FusedLocationProvider or LocationManager APIs, the system automatically triggers a device wake-up during the location event callback. This brief, system-managed wake lock is exempted from excessive partial wake lock calculations. Avoid acquiring a separate, continuous wake lock for caching location data, as this is redundant. Instead, persist location events in memory or local storage and leverage WorkManager to process them at periodic intervals.

override fun onCreate(savedInstanceState: Bundle?) {
    locationCallback = object : LocationCallback() {
        override fun onLocationResult(locationResult: LocationResult?) {
            locationResult ?: return
            // The system wakes up the CPU for a short duration
            for (location in locationResult.locations) {
                // Store data in memory to process at another time
            }
        }
    }
}

High Frequency Sensor Monitoring

Example use cases:
- Pedometer apps that passively collect steps or distance traveled.
- Safety apps that monitor the device sensors for rapid changes in real time, to provide features such as crash detection or fall detection.

How to reduce wake locks: If using SensorManager, reduce usage to periodic intervals and only when the user has explicitly granted access through a UI interaction. High frequency sensor monitoring can drain the battery heavily due to the number of CPU wake-ups and the processing that occurs. If you’re tracking step counts or distance traveled, rather than using SensorManager, leverage the Recording API or consider utilizing Health Connect to access historical and aggregated device step counts in a battery-efficient manner. If you’re registering a sensor with SensorManager, specify a maxReportLatencyUs of 30 seconds or more to leverage sensor batching and minimize the frequency of CPU interrupts. When the device is subsequently woken by another trigger, such as a user interaction, location retrieval, or a scheduled job, the system will immediately dispatch the cached sensor data.
val accelerometer = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER)
sensorManager.registerListener(
    this,
    accelerometer,
    samplingPeriodUs, // How often to sample data
    maxReportLatencyUs // Key for sensor batching
)

If your app requires both location and sensor data, synchronize their event retrieval and processing. By piggybacking sensor readings onto the brief wake lock the system holds for location updates, you avoid needing a wake lock of your own to keep the CPU awake. Use a worker or a short-duration wake lock to handle the upload and processing of this combined data.

Remote Messaging

Example use cases:
- Video or sound monitoring companion apps that need to monitor events occurring on an external device connected over a local network.
- Messaging apps that maintain a network socket connection with their desktop variant.

How to reduce wake locks: If the network events can be processed on the server side, use FCM to receive information on the client. You may choose to schedule an expedited worker if additional processing of FCM data is required. If events must be processed on the client side via a socket connection, a wake lock is not needed to listen for event interrupts. When data packets arrive at the Wi-Fi or cellular radio, the radio hardware triggers a hardware interrupt in the form of a kernel wake lock. You may then choose to schedule a worker or acquire a wake lock to process the data. For example, if you’re using ktor-network to listen for data packets on a network socket, you should only acquire a wake lock when packets have been delivered to the client and need to be processed.
val readChannel = socket.openReadChannel()
while (!readChannel.isClosedForRead) {
    // The CPU can safely sleep here while waiting for the next packet
    val packet = readChannel.readRemaining(1024)
    if (!packet.isEmpty) {
        // Data arrived: the system woke the CPU, and we should keep it awake
        // via a manual wake lock (urgent) or by scheduling a worker (non-urgent)
        performWorkWithWakeLock {
            val data = packet.readBytes()
            // Additional logic to process data packets
        }
    }
}

Summary

By adopting these recommended solutions for common use cases like background syncs, location tracking, sensor monitoring, and network communication, developers can work towards reducing unnecessary wake lock usage. To continue learning, read our other technical blog post or watch our technical video on how to discover and debug wake locks: Optimize your app battery using Android vitals wake lock metric. Also, consult our updated wake lock documentation. To help us continue improving our technical resources, please share any additional feedback on our guidance in our documentation feedback survey.
Posted by Breana Tate, Developer Relations Engineer; Mayank Saini, Senior Android Engineer; Sarthak Jagetia, Senior Android Engineer; and Manmeet Tuteja, Android Engineer II

Building an Android app for a wearable means the real work starts when the screen turns off. WHOOP helps members understand how their body responds to training, recovery, sleep, and stress, and for the many WHOOP members on Android, reliable background syncing and connectivity are what make those insights possible. Earlier this year, Google Play released a new metric in Android vitals: Excessive partial wake locks. This metric measures the percentage of user sessions where cumulative, non-exempt wake lock usage exceeds 2 hours in a 24-hour period. The aim of this metric is to help you identify and address possible sources of battery drain, which is crucial for delivering a great user experience. Beginning March 1, 2026, apps that continue to miss the quality threshold may be excluded from Google Play discovery surfaces. A warning may also be placed on the Google Play Store listing, indicating the app might use more battery than expected.

According to Mayank Saini, Senior Android Engineer at WHOOP, this “presented the team with an opportunity to raise the bar on Android efficiency” after Android vitals flagged the app’s excessive partial wake lock rate at 15%, which exceeded the recommended 5% threshold. The team viewed the Android vitals metric as a clear signal that their background work was holding the CPU awake longer than necessary. Resolving this would allow them to continue to deliver a great user experience while simultaneously decreasing wasted background time and maintaining reliable, timely Bluetooth connectivity and syncing.

Identifying the issue

To figure out where to get started, the team first turned to Android vitals for more insight into which wake locks were affecting the metric.
By consulting the Android vitals excessive partial wake locks dashboard, they were able to identify the biggest contributor to excessive partial wake locks as one of their WorkManager workers (identified in the dashboard as androidx.work.impl.background.systemjob.SystemJobService). To support the WHOOP “always-on experience”, the app uses WorkManager for background tasks like periodic syncing and delivering recurring updates to the wearable. While the team was aware that WorkManager acquires a wake lock while executing tasks in the background, they previously did not have visibility into how all of their background work (beyond just WorkManager) was distributed until the introduction of the excessive partial wake locks metric in Android vitals. With the dashboard identifying WorkManager as the main contributor, the team was then able to focus their efforts on identifying which of their workers was contributing the most and work towards resolving the issue.

Making use of internal metrics and data to better narrow down the cause

WHOOP already had internal infrastructure set up to monitor WorkManager metrics. They periodically monitor:

Average Runtime: How long does the worker run?
Timeouts: How often does the worker time out instead of completing?
Retries: How often does the worker retry after timing out or failing?
Cancellations: How often was the work cancelled?

Tracking more than just worker successes and failures gives the team visibility into their work’s efficiency. The internal metrics flagged high average runtimes for a select few workers, enabling them to narrow the investigation down even further. In addition to their internal metrics, the team also used Android Studio’s Background Task Inspector to inspect and debug the workers of interest, with a specific focus on associated wake locks, to align with the metric flagged in Android vitals.
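Internal telemetry like WHOOP's can be approximated with a small aggregation. The following Kotlin sketch uses hypothetical data shapes (WorkerRun and WorkerStats are illustrative names, not WorkManager APIs) to show how average runtime and timeout, retry, and cancellation rates might be derived from per-run records:

```kotlin
// Hypothetical record of one worker execution.
data class WorkerRun(
    val workerName: String,
    val runtimeMs: Long,
    val timedOut: Boolean,
    val retried: Boolean,
    val cancelled: Boolean,
)

// Hypothetical per-worker summary, mirroring the four metrics listed above.
data class WorkerStats(
    val averageRuntimeMs: Long,
    val timeoutRate: Double,
    val retryRate: Double,
    val cancellationRate: Double,
)

fun summarize(runs: List<WorkerRun>): Map<String, WorkerStats> =
    runs.groupBy { it.workerName }.mapValues { (_, rs) ->
        WorkerStats(
            averageRuntimeMs = rs.map { it.runtimeMs }.average().toLong(),
            timeoutRate = rs.count { it.timedOut }.toDouble() / rs.size,
            retryRate = rs.count { it.retried }.toDouble() / rs.size,
            cancellationRate = rs.count { it.cancelled }.toDouble() / rs.size,
        )
    }
```

A dashboard over stats like these is what let the team spot workers with unusually high average runtimes before digging into individual traces.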
Investigation: Distinguishing between worker variants

WHOOP uses both one-time and periodic scheduling for some workers. This allows the app to reuse the same Worker logic for identical tasks with the same success criteria, differing only in timing. Using their internal metrics made it possible to narrow the search to a specific worker, but they couldn't tell whether the bug occurred when the worker was one-time, periodic, or both. So they rolled out an update to use WorkManager’s setTraceTag method to distinguish between the one-time and periodic variants of the same Worker. This extra detail would allow them to definitively identify which Worker variant (periodic or one-time) was contributing the most to sessions with excessive partial wake locks. However, the team was surprised when the data revealed that neither variant appeared to be contributing more than the other. Manmeet Tuteja, Android Engineer II at WHOOP, said “that split also helped us confirm the issue was happening in both variants, which pointed away from scheduling configuration and toward a shared business logic problem inside the worker implementation.”

Diving deeper on worker behavior and fixing the root cause

With the knowledge that they needed to look at logic within the worker, the team re-examined the behavior of the workers that had been flagged during their investigation. Specifically, they were looking for instances in which work may have been getting stuck and never completing. All of this culminated in finding the root cause of the excessive wake locks: a CoroutineWorker that was designed to wait for a connection to the WHOOP sensor before proceeding. If the work started with no sensor connected, whoopSensorFlow, which indicates whether the sensor is connected, was null. The SensorWorker didn’t treat this as an early-exit condition and kept running, effectively waiting indefinitely for a connection.
As a result, WorkManager held a partial wake lock until the work timed out, leading to high background wake lock usage and frequent, unwanted rescheduling of the SensorWorker. To address this, the WHOOP team updated the worker logic to check the connection status before attempting to execute the core business logic. If the sensor isn’t available, the worker exits, avoiding a timeout scenario and releasing the wake lock. The following code snippet shows the solution:

class SensorWorker(appContext: Context, params: WorkerParameters) :
    CoroutineWorker(appContext, params) {

    override suspend fun doWork(): Result {
        ...
        // Check the sensor state and perform work or return failure
        return whoopSensorFlow.replayCache
            .firstOrNull()
            ?.let { cachedData ->
                processSensorData(cachedData)
                Result.success()
            }
            ?: run {
                Result.failure()
            }
    }
}

Achieving a 90% decrease in sessions with excessive partial wake locks

After rolling out the fix, the team continued to monitor the Android vitals dashboard to confirm the impact of the changes. Ultimately, WHOOP saw their excessive partial wake lock percentage drop from 15% to less than 1% just 30 days after implementing the changes to their Worker. As a result of the changes, the team has seen fewer instances of work timing out without completing, resulting in lower average runtimes. For the WHOOP team’s advice to other developers that want to improve their background work’s efficiency, along with more insights from the developers, check out the WHOOP team's blog.

Get Started

If you’re interested in reducing your app’s excessive partial wake locks or improving worker efficiency, view your app’s excessive partial wake locks metric in Android vitals, and review the wake locks documentation for more best practices and debugging strategies.
Posted by Sameer Samat, President of Android Ecosystem

Android has always driven innovation in the industry through its unique flexibility and openness. At this important moment, we want to continue leading the way in how developers distribute their apps and games to people on billions of devices across many form factors. A modern platform must be flexible, providing developers and users with choice and openness as well as a safe experience. Today we are announcing substantial updates that evolve our business model and build on our long history of openness globally. We’re doing that in three ways: more billing options, a program for registered app stores, and lower fees and new programs for developers.

Expanded billing choice on Google Play for users and developers

Google Play is giving developers even more billing choice and freedom in how they handle transactions. Mobile developers will have the option to use their own billing systems in their app alongside Google Play’s billing, or they can guide users outside of their app to their own websites for purchases. Our goal is to offer this flexibility in a way that maximizes choice and safety for users.

Leading the way in store choice

We’re introducing a program that makes sideloading qualified app stores even easier. Our new Registered App Stores program will provide a more streamlined installation flow for Android app stores that meet certain quality and safety benchmarks. Once this change has rolled out, app stores that choose to participate in this optional program will be registered with us, and users who sideload them will see a simplified installation flow (see graphic below). If a store chooses not to participate, nothing changes for them: they retain the same experience as any other sideloaded app on Android. This gives app stores more ways to reach users and gives users more ways to easily and safely access the apps and games they love.
This Registered App Store program will begin outside of the US first, and we intend to bring it to the US as well, subject to court approval.

Lower pricing and new programs to support developers

Google Play’s fees are already the lowest among major app stores, and today we are taking this even further by introducing a new business model that decouples fees for using our billing system and introduces new, lower service fees. Once this rolls out:

Billing: Developers who choose to use Google Play’s billing system will be charged a market-specific rate separate from the service fee. In the European Economic Area (EEA), UK, and US that rate will be 5%.

Service Fees: For new installs (first-time installs from users after the new fees are launched in a region), we are reducing the in-app purchase (IAP) service fee to 20%. We are launching an Apps Experience Program and revamping our Google Play Games Level Up program to incentivize building great software experiences across Android form factors, with clear quality benchmarks and enhanced user benefits. Developers who choose to participate in these programs will have even lower rates: participating IAP developers will have a 20% service fee for transactions from existing installs and a 15% fee on transactions from new app installs. Our service fee for recurring subscriptions will be 10%.

Rollout timelines

This is a significant evolution, and we plan to share additional details in the coming months. To make sure we have enough time to build the necessary technical infrastructure, enable a seamless transition for developers, and ensure alignment with local regulations, these updated fees will roll out on the following staggered schedule:

By June 30: EEA, the United Kingdom, and the US
By September 30: Australia
By December 31: Korea and Japan
By September 30, 2027: The updates will reach the rest of the world.
We will also launch the updated Google Play Games Level Up program and the new Apps Experience Program by September 30 for the EEA, UK, US, and Australia; they will then roll out in line with the rest of the schedule above. We plan to launch Registered App Stores with a version of a major Android release by the end of the year.

Resolving disputes with Epic Games

With these updates, we have also resolved our disputes worldwide with Epic Games. We believe these changes will make for a stronger Android ecosystem with even more successful developers and higher-quality apps and games available across more form factors for everyone. We look forward to our continued work with the developer community to build the next generation of digital experiences.
Posted by Francesco Romano, Senior Developer Relations Engineer on Android

We are excited to announce a major milestone in bringing mobile and desktop computing closer together on Android: connected display support has reached general availability with the Android 16 QPR3 release! As shown at Google I/O 2025, connected displays allow users to connect their Android devices to an external monitor and instantly access a desktop windowing environment. Apps can be used in free-form or maximized windows, and users can multitask just like they would on a desktop OS. Google and Samsung have collaborated to bring a seamless and powerful desktop windowing experience to devices across the Android ecosystem running Android 16 while connected to an external display. The feature is now generally available on supported devices*, letting users connect supported Pixel and Samsung phones to external monitors and enabling new opportunities for building more engaging and more productive app experiences that adapt across form factors.

How does it work?

When a supported Android phone or foldable is connected to an external display, a new desktop session starts on the connected display. The experience on the connected display is similar to the experience on a desktop, including a taskbar that shows active apps and lets users pin apps for quick access. Users can run multiple apps side by side simultaneously in freely resizable windows on the connected display.

Phone connected to an external display, with a desktop session on the display while the phone maintains its own state.

When a device that supports desktop windowing (such as a tablet like the Samsung Galaxy Tab S11) is connected to an external display, the desktop session is extended across both displays, unlocking an even more expansive workspace. The two displays then function as one continuous system, allowing app windows, content, and the cursor to move freely between the displays.
Tablet connected to an external display, extending the desktop session across both displays.

Why does it matter?

In the Android 16 QPR3 release, we finalized the windowing behaviors, taskbar interactions, and input compatibility (mouse and keyboard) that define the connected display experience. We also included compatibility treatments to scale windows and avoid app restarts when switching displays. If your app is built with adaptive design principles, it will automatically have the desktop look and feel, and users will feel right at home. If the app is locked to portrait or assumes a touch-only interface, now is the time to modernize. In particular, pay attention to these key best practices for optimal app experiences on connected displays:

Don't assume a constant Display object: The Display object associated with your app's context can change when an app window is moved to an external display or when the display configuration changes. Your app should gracefully handle configuration change events and query display metrics dynamically rather than caching them.

Account for density configuration changes: External displays can have vastly different pixel densities than the primary device screen. Ensure your layouts and resources adapt correctly to these changes to maintain UI clarity and usability. Use density-independent pixels (dp) for layouts, provide density-specific resources, and ensure your UI scales appropriately.

Correctly support external peripherals: When users connect to an external monitor, they often create a more desktop-like environment. This frequently involves external keyboards, mice, trackpads, webcams, microphones, and speakers. Improve your app's support for keyboard and mouse interactions.

Building for the desktop future with modern tools

We provide several tools to help you build the desktop experience. Let’s recap the latest updates to our core adaptive libraries!
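The density guidance above boils down to px = dp × (densityDpi / 160), where 160dpi is the baseline mdpi density. A pure-Kotlin sketch of that arithmetic (the helper names are illustrative, not framework APIs; in an app you would read densityDpi from the current configuration on each change rather than caching it):

```kotlin
// Baseline density: 1dp == 1px at 160dpi ("mdpi").
fun dpToPx(dp: Float, densityDpi: Int): Float = dp * densityDpi / 160f
fun pxToDp(px: Float, densityDpi: Int): Float = px * 160f / densityDpi

// Making layout decisions from the window width in dp (rather than cached
// pixels) keeps them correct when the window moves to a display whose
// density differs from the phone screen's.
fun windowWidthDp(widthPx: Int, densityDpi: Int): Float =
    pxToDp(widthPx.toFloat(), densityDpi)
```

For example, a 2400px-wide window on a 320dpi display measures 1200dp, while the same pixel width on a 160dpi monitor measures 2400dp, so pixel-based layout logic would behave very differently on the two displays.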
New window size classes: Large and Extra-large

The biggest update in Jetpack WindowManager 1.5.0 is the addition of two new width window size classes: Large and Extra-large. Window size classes are our official, opinionated set of viewport breakpoints that help you design and develop adaptive layouts. With 1.5.0, we're extending this guidance for screens that go beyond the size of typical tablets. Here are the new width breakpoints:

Large: For widths between 1200dp and 1600dp
Extra-large: For widths ≥1600dp

The different window size classes based on display width.

On very large surfaces, simply scaling up a tablet's Expanded layout isn't always the best user experience. An email client, for example, might comfortably show two panes (a mailbox and a message) in the Expanded window size class. But on an Extra-large desktop monitor, the email client could elegantly display three or even four panes, perhaps a mailbox, a message list, the full message content, and a calendar/tasks panel, all at once.

To include the new window size classes in your project, simply call computeWindowSizeClass on the WindowSizeClass.BREAKPOINTS_V2 set instead of WindowSizeClass.BREAKPOINTS_V1:

val currentWindowMetrics = WindowMetricsCalculator.getOrCreate()
    .computeCurrentWindowMetrics(LocalContext.current)

val sizeClass = WindowSizeClass.BREAKPOINTS_V2
    .computeWindowSizeClass(currentWindowMetrics)

Then apply the correct layout when you’re sure your app has at least that much space:

if (sizeClass.isWidthAtLeastBreakpoint(WindowSizeClass.WIDTH_DP_LARGE_LOWER_BOUND)) {
    ... // Window is at least 1200dp wide.
}

Build adaptive layouts with Jetpack Navigation 3

Navigation 3 is the latest addition to the Jetpack collection. Navigation 3, which just reached its first stable release, is a powerful navigation library designed to work with Compose.
Navigation 3 is also a great tool for building adaptive layouts by allowing multiple destinations to be displayed at the same time and allowing seamless switching between those layouts. This system for managing your app's UI flow is based on Scenes. A Scene is a layout that displays one or more destinations at the same time. A SceneStrategy determines whether it can create a Scene. Chaining SceneStrategy instances together allows you to create and display different scenes for different screen sizes and device configurations. For out-of-the-box canonical layouts, like list-detail and supporting pane, you can use the Scenes from the Compose Material 3 Adaptive library (available in version 1.3 and above). It's also easy to build your own custom Scenes by modifying the Scene recipes or starting from scratch. For example, let’s consider a Scene that displays three panes side by side: class ThreePaneScene<T : Any>( override val key: Any, override val previousEntries: List<NavEntry<T>>, val firstEntry: NavEntry<T>, val secondEntry: NavEntry<T>, val thirdEntry: NavEntry<T> ) : Scene<T> { override val entries: List<NavEntry<T>> = listOf(firstEntry, secondEntry, thirdEntry) override val content: @Composable (() -> Unit) = { Row(modifier = Modifier.fillMaxSize()) { Column(modifier = Modifier.weight(1f)) { firstEntry.Content() } Column(modifier = Modifier.weight(1f)) { secondEntry.Content() } Column(modifier = Modifier.weight(1f)) { thirdEntry.Content() } } } In this scenario, you could define a SceneStrategy to show three panes if the window width is wide enough and the entries from your back stack have declared that they support being displayed in a three-pane scene. class ThreePaneSceneStrategy<T : Any>(val windowSizeClass: WindowSizeClass) : SceneStrategy<T> { override fun SceneStrategyScope<T>.calculateScene(entries: List<NavEntry<T>>): Scene<T>? 
{ if (windowSizeClass.isWidthAtLeastBreakpoint(WIDTH_DP_LARGE_LOWER_BOUND)) { val lastThree = entries.takeLast(3) if (lastThree.size == 3 && lastThree.all { it.metadata.containsKey(MULTI_PANE_KEY) }) { val firstEntry = lastThree[0] val secondEntry = lastThree[1] val thirdEntry = lastThree[2] return ThreePaneScene( key = Triple(firstEntry.contentKey, secondEntry.contentKey, thirdEntry.contentKey), previousEntries = entries.dropLast(3), firstEntry = firstEntry, secondEntry = secondEntry, thirdEntry = thirdEntry ) } } return null } } You can use your ThreePaneSceneStrategy with other strategies when creating your NavDisplay. For example, we could also add a TwoPaneSceneStrategy to display two panes side by side when there isn't enough space to show three. val strategy = ThreePaneSceneStrategy(windowSizeClass) then TwoPaneSceneStrategy() NavDisplay(..., sceneStrategy = strategy, entryProvider = entryProvider { entry<MyScreen>(metadata = mapOf(MULTI_PANE_KEY to true)) { ... } ... other entries... } ) If there isn't enough space to display three or two panes, both of our custom scene strategies return null. In this case, NavDisplay falls back to displaying the last entry in the back stack in a single pane using SinglePaneScene. By using scenes and strategies, you can add one-, two-, and three-pane layouts to your app! An adaptive app showing three-pane navigation on wide screens. Check out the documentation to learn more about how to create custom layouts using Scenes in Navigation 3. Standalone adaptive layouts If you need a standalone layout, the Compose Material 3 Adaptive library helps you create adaptive UIs like list-detail and supporting pane layouts that adapt themselves to window configurations automatically based on window size classes or device postures. The good news is that the library is already up to date with the new breakpoints! Starting from version 1.2, the default pane scaffold directive functions support Large and Extra-large width window size classes.
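Conceptually, chaining strategies with `then` just asks each strategy in order and takes the first non-null Scene. A simplified pure-Kotlin model of that fallback behavior follows; the PaneStrategy type and its members are hypothetical stand-ins, not the Navigation 3 API.

```kotlin
// Hypothetical stand-in types for illustration only; the real Navigation 3
// SceneStrategy operates on back stack entries and returns a Scene?.
fun interface PaneStrategy {
    // Returns a pane count for the layout it would create, or null if this
    // strategy cannot handle the current conditions.
    fun calculatePanes(entryCount: Int, widthDp: Int): Int?
}

// Chaining: try the receiver first, and fall back to `next` if it returns null.
infix fun PaneStrategy.then(next: PaneStrategy) = PaneStrategy { count, width ->
    calculatePanes(count, width) ?: next.calculatePanes(count, width)
}

val threePane = PaneStrategy { count, width -> if (count >= 3 && width >= 1200) 3 else null }
val twoPane = PaneStrategy { count, width -> if (count >= 2 && width >= 840) 2 else null }
val chained = threePane then twoPane
```

When every strategy in the chain returns null, the caller falls back to a single pane, mirroring NavDisplay's SinglePaneScene fallback.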
To use the new breakpoints with the adaptive library, you only need to opt in by passing supportLargeAndXLargeWidth = true when retrieving the window adaptive info: currentWindowAdaptiveInfo(supportLargeAndXLargeWidth = true) Getting started Explore the connected display feature in the latest Android release. Get Android 16 QPR3 on a supported device, then connect it to an external monitor to start testing your app today! Dive into the updated documentation on multi-display support and window management to learn more about implementing these best practices. Feedback Your feedback is crucial as we continue to refine the connected display desktop experience. Share your thoughts and report any issues through our official feedback channels. We're committed to making Android a versatile platform that adapts to the many ways users want to interact with their apps and devices. The improvements to connected display support are another step in that direction, and we think your users will love the desktop experiences you'll build! *Note: At the time of writing, connected displays are supported on the Pixel 8, 9, and 10 series and on a wide array of Samsung devices, including the S26, Fold7, Flip7, and Tab S11.
Posted by Matt Dyor, Senior Product Manager Android Studio Panda 2 is now stable and ready for you to use in production. This release brings new agentic capabilities to Android Studio, enabling the agent to create an entire working application from scratch with the AI-powered New Project flow, and allowing the agent to automate the manual work of dependency updates. Whether you're building your first prototype or maintaining a large, established codebase, these updates bring new efficiency to your workflow by enabling Gemini in Android Studio to help more than ever. Here’s a deep dive into what’s new: Create New Projects with AI Say goodbye to boilerplate starter templates that just get you to the start line. With the AI-powered New Project flow, you can now build a working app prototype with just a single prompt. The agent reduces the time you spend setting up dependencies, writing boilerplate code, and creating basic navigation, allowing you to focus on the creative aspects of app development. The AI-powered New Project flow allows you to describe exactly what you want to build - you can even upload images for style inspiration. The agent then creates a detailed project plan for your review. When you're ready, the agent turns your plan into a first draft of your app using Android best practices, including Kotlin, Compose, and the latest stable libraries. Under your direction, it creates an autonomous generation loop: it generates the necessary code, builds the project, analyzes any build errors, and attempts to self-correct the code, looping until your project builds successfully. It then deploys your app to an Android Emulator and walks through each screen, verifying that the implementation works correctly and is true to your original request. Whether you need a simple single-screen layout, a multi-page app with navigation, or even an application integrated with Gemini APIs, the AI-powered New Project flow can handle it. 
Getting Started To use the agent to set up a project, do the following: Start Android Studio. Select New Project on the Welcome to Android Studio screen (or File > New > New Project from within a project) Select Create with AI. Type your prompt into the text entry field and click Next. For best results we recommend using a paid Gemini API key or third-party remote model. Create a New Project with AI in Android Studio 5. Name your app and click Finish to start the generation process. 6. Validate the finished app using the project plan and by running your app in the Android Emulator or on an Android device. AI-powered New Project flow For more details on the New Project flow, check out the official documentation. Share What You Build We want to hear from you and see the apps you’re able to build using the New Project flow. Share your apps with us by using #AndroidStudio in your social posts. We’ll be amplifying some of your submissions on our social channels. Unlock more with your Gemini API key While the agent works out-of-the-box using Android Studio's default no-cost model, providing your own Google AI Studio API key unlocks the full potential of the assistant. By connecting a paid Gemini API key, you get access to the fastest and latest models from Google. It also allows the New Project flow to access Nano Banana, our best model for image generation, in order to ideate on UI design — allowing the agent to create richer, higher fidelity application designs. In the AI-powered New Project flow, this increased capability means larger context windows for more tailored generation, as well as superior code quality. Furthermore, because the Agent uses Nano Banana behind the scenes for enhanced design generation, your prototype doesn't just work well—it features visually appealing, modern UI layouts and looks professional from the get go. Version Upgrade Assistant Keeping your project dependencies up to date is time-consuming and often causes cascading build errors. 
You fix one issue by updating a dependency, only to introduce a new issue somewhere else. The Version Upgrade Assistant in Android Studio makes that a problem of the past. You can now let AI do the heavy lifting of managing dependencies and boilerplate so you can focus on creating unique experiences for your users. To use this feature, simply right-click in your version catalog, select AI, and then Update Dependencies. Version Upgrade Assistant accessed from Version Catalog You can also access the Version Upgrade Assistant from the Refactor menu—just choose Update all libraries with AI. Version Upgrade Assistant accessed from the Refactor menu The agent runs multiple automated rounds—attempting builds, reading error messages, and adjusting versions—until the build succeeds. Instead of manually fighting through dependency conflicts, you can let the agent handle the iterative process of finding a stable configuration for you. Read the documentation for more information on Version Upgrade Assistant. Gemini 3.1 Pro is available in Android Studio We've released the Gemini 3.1 Pro preview, which is even better than Gemini 3 Pro for reasoning and intelligence. You can access it in Android Studio by plugging in your Gemini API key. Put the new model to work on your toughest bugs, code completion, and UI logic. Let us know what you think of the new model. Gemini 3.1 Pro Now Available in Android Studio Get started Dive in and accelerate your development. Download Android Studio Panda 2 and start exploring these powerful new agentic features today. As always, your feedback is crucial to us. Check known issues, report bugs, and be part of our vibrant community on LinkedIn, Medium, YouTube, or X. Happy coding!
Posted by Trevor Johns, Developer Relations Engineer In January we announced Android Studio Otter 3 Feature Drop in stable, including Agent Mode enhancements and many other updates to provide more control and flexibility over using AI to help you build high quality Android apps. To help you get the most out of Gemini in Android Studio and all the new capabilities, we sat down with Google engineers and Google Developer Experts to gather their best practices for working with the latest features—including Agent mode and the New Project Assistant. Here are some useful insights to help you get the best out of your development: Build apps from scratch with the New Project Assistant The new Project Assistant—now available in the latest Canary builds—integrates Gemini with the Studio's New Project wizard. By simply providing prompts and (optionally) design mockups, you can generate entire applications from scratch, including scaffolding, architecture, and Jetpack Compose layouts. Integrated with the Android Emulator, it can deploy your build and "walk through" the app, making sure it’s functioning correctly and that the rendered screens actually match your vision. Additionally, you can use Agent Mode to then continue to work on the app and iterate, leveraging Gemini to refine your app to fit your vision. Also, while this feature works with the default (no-cost) model, we highly recommend using this feature with an AI Studio API Key to access the latest models — like Gemini 3.1 Pro or 3.0 Flash — which excel at agentic workflows. Additionally, adding your API Key allows the New Project Assistant to use Nano Banana behind the scenes to help with ideating on UI design, improving the visual fidelity of the generated application! - Trevor Johns, Developer Relations Engineer. Dialog for setting up a new project. 2. 
Ask the Agent to refine your code by providing it with ‘intentional’ context When using Gemini Agents, the quality of the output is directly tied to the boundaries you set. Don't just ask it to "fix this code": be very intentional with the context that you provide and be specific about what you want (and what you don't). Improve the output by providing recent blogs or docs so the model can make accurate suggestions based on them. Ask the Agent to simplify complex logic, ask whether it sees any fundamental problems with it, or even ask it to scan for security risks in areas where you feel uncertain. Being firm with your instructions, even telling the model "please do not invent things" in instances where you are using very new or experimental APIs, helps keep the AI focused on the outputs you are trying to achieve. - Alejandra Stamato, Android Google Developer Expert and Android Engineer at HubSpot. 3. Use documentation with Agent mode to provide context for new libraries To prevent the model from hallucinating code for niche or brand-new libraries, leverage Android Studio’s Agent tools for accessing documentation: Search Android Docs and Fetch Android Docs. You can direct Gemini to search the Android Knowledge Base or specific documentation articles. The model can choose to use these if it thinks it’s missing some information, which is especially helpful when you use niche or less common APIs. If you are certain you want the model to consult the documentation and want to make sure those tools are triggered, a good trick is to add something like ‘search the official documentation’ or ‘check the docs’ to your prompts. For documentation on libraries that aren’t Android-specific, install an MCP server that provides access to documentation, such as Context7 (or something similar). - Jose Alcérreca, Android Developer Relations Engineer, Google. 4.
Use AI to help build AGENTS.md files for using custom frameworks, libraries and design systems To make sure the Agent uses your custom frameworks, libraries and design systems, you have two options: 1) In settings, Android Studio allows you to specify rules to be followed when Gemini is performing these actions for you. Or 2) Create AGENTS.md files in your application that act as guidance when the AI performs a task, specifying frameworks, design systems, and specific ways of doing things (such as the exact architecture, and things to do or not do) as standard bullet points that give the AI clear instructions. Manage AGENTS.md files as context. You can place an AGENTS.md file at the root of the project, and can have them in different modules (or even subdirectories) of your project as well! The more context and guidance you make available, the more the AI can draw on as you work. If you get stuck creating these AGENTS.md files, you can use AI to help build them, or to generate foundations based on your existing projects that you can then edit, so you don’t have to start from scratch. - Joe Birch, Android Google Developer Expert and Staff Engineer at Buffer. 5. Offload the tedious tasks to Agent and save yourself time The Gemini in Android Studio agent can make tasks such as writing and reviewing faster. For example, it can help write commit messages, giving you a good summary which you can then review, saving yourself time. Additionally, you can get it to write tests; under your direction, the Agent can look at the other tests in your project and, following best practices, write a good test for you to run. Another good example of a tedious task is writing a new parser for a certain JSON format. Just give Gemini a few examples and it will get you started very quickly. - Diego Perez, Android Software Engineer, Google 6.
Control what you are sharing with AI using simple opt-outs or commands, alongside paid models. If you want to control what is shared with AI while on the no-cost plans, you can opt some or all of your code out of model training by adding an AI exclusions file (‘.aiexclude’) to your project. This file uses glob pattern matching similar to a .gitignore file, specifying sensitive directories or files that should be hidden from the AI. You can place .aiexclude files anywhere within the project and its VCS roots to control which files AI features are allowed to access. An example of an `.aiexclude` file in Android Studio. Alternatively, in Android Studio settings, you can also opt out of context sharing on either a per-project or per-user basis (although this method limits the functionality of a number of features because the AI won’t see your code). Remember, paid plans never use your code for model training. This includes both users with an AI Studio API key and businesses subscribed to Gemini Code Assist. - Trevor Johns, Developer Relations Engineer. Hear more from the Android team and Google Developer Experts about Gemini in Android Studio in our recent fireside chat and download Android Studio to get started.
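As an aside on the `.aiexclude` glob patterns mentioned above: java.nio's PathMatcher uses similar glob semantics, so it can illustrate how such patterns match paths. This sketch is only an analogy, not the actual .aiexclude engine, and the patterns in the tests are made up.

```kotlin
import java.nio.file.FileSystems
import java.nio.file.Paths

// Illustration only: java.nio's glob matching is similar in spirit to the
// patterns used by .aiexclude / .gitignore. The patterns below are made up.
fun matchesGlob(pattern: String, relativePath: String): Boolean =
    FileSystems.getDefault()
        .getPathMatcher("glob:$pattern")
        .matches(Paths.get(relativePath))
```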
Posted by Matthew McCullough, VP Product Management, Android Developer Today we're releasing the second beta of Android 17, continuing our work to build a platform that prioritizes privacy, security, and refined performance. This update delivers a range of new capabilities, including the EyeDropper API and a privacy-preserving Contacts Picker. We're also adding advanced ranging, cross-device handoff APIs, and more. This release continues the shift in our release cadence, following this annual major SDK release in Q2 with a minor SDK update. User Experience & System UI Bubbles Bubbles is a windowing mode feature that offers a new floating UI experience separate from the messaging bubbles API. Users can create an app bubble on their phone, foldable, or tablet by long-pressing an app icon on the launcher. On large screens, there is a bubble bar as part of the taskbar where users can organize, move between, and move bubbles to and from anchored points on the screen. You should follow the guidelines for supporting multi-window mode to ensure your apps work correctly as bubbles. Bubbles aren't yet fully enabled in Beta 2. Look for them in a future build of Android 17. EyeDropper API A new system-level EyeDropper API allows your app to request a color from any pixel on the display without requiring sensitive screen capture permissions. val eyeDropperLauncher = registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result -> if (result.resultCode == Activity.RESULT_OK) { val color = result.data?.getIntExtra(Intent.EXTRA_COLOR, Color.BLACK) // Use the picked color in your app } } fun launchColorPicker() { val intent = Intent(Intent.ACTION_OPEN_EYE_DROPPER) eyeDropperLauncher.launch(intent) } Contacts Picker A new system-level contacts picker via ACTION_PICK_CONTACTS grants temporary, session-based read access to only the specific data fields requested by the user, reducing the need for the broad READ_CONTACTS permission.
It also allows for selections from the device’s personal or work profiles. val contactPicker = rememberLauncherForActivityResult(StartActivityForResult()) { if (it.resultCode == RESULT_OK) { val uri = it.data?.data ?: return@rememberLauncherForActivityResult // Handle result logic processContactPickerResults(uri) } } val dataFields = arrayListOf(Email.CONTENT_ITEM_TYPE, Phone.CONTENT_ITEM_TYPE) val intent = Intent(ACTION_PICK_CONTACTS).apply { putStringArrayListExtra(EXTRA_PICK_CONTACTS_REQUESTED_DATA_FIELDS, dataFields) putExtra(EXTRA_ALLOW_MULTIPLE, true) putExtra(EXTRA_PICK_CONTACTS_SELECTION_LIMIT, 5) } contactPicker.launch(intent) Easier pointer capture compatibility with touchpads Previously, touchpads reported events in a very different way from mice when an app had captured the pointer, reporting the locations of fingers on the pad rather than the relative movements that would be reported by a mouse. This made it quite difficult to support touchpads properly in first-person games. Now, by default the system will recognize pointer movement and scrolling gestures when the touchpad is captured, and report them just like mouse events. You can still request the old, detailed finger location data by explicitly requesting capture in the new “absolute” mode. // To request the new default relative mode (mouse-like events) // This is the same as requesting with View.POINTER_CAPTURE_MODE_RELATIVE view.requestPointerCapture() // To request the legacy absolute mode (raw touch coordinates) view.requestPointerCapture(View.POINTER_CAPTURE_MODE_ABSOLUTE) Interactive Chooser resting bounds By calling getInitialRestingBounds on Android's ChooserSession, your app can identify the target position the Chooser occupies after animations and data loading are complete, enabling better UI adjustments. Connectivity & Cross-Device Cross-device app handoff A new Handoff API allows you to specify application state to be resumed on another device, such as an Android tablet. 
When opted in, the system synchronizes state via CompanionDeviceManager and displays a handoff suggestion in the launcher of the user's nearby devices. This feature is designed to offer seamless task continuity, enabling users to pick up exactly where they left off in their workflow across their Android ecosystem. Critically, Handoff supports both native app-to-app transitions and app-to-web fallback, providing maximum flexibility and ensuring a complete experience even if the native app is not installed on the receiving device. Advanced ranging APIs We are adding support for two new ranging technologies. UWB DL-TDOA enables apps to use UWB for indoor navigation; this API surface is compliant with the FiRa (Fine Ranging) Consortium 4.0 DL-TDOA spec and enables privacy-preserving indoor navigation (avoiding tracking of the device by the anchor). Proximity Detection enables apps to use the new ranging specification being adopted by the Wi-Fi Alliance (WFA); this technology provides improved reliability and accuracy compared to the existing Wi-Fi Aware-based ranging specification. Data plan enhancements To optimize media quality, your app can now retrieve carrier-allocated maximum data rates for streaming applications using getStreamingAppMaxDownlinkKbps and getStreamingAppMaxUplinkKbps. Core Functionality, Privacy & Performance Local Network Access Android 17 introduces the ACCESS_LOCAL_NETWORK runtime permission to protect users from unauthorized local network access. Because this falls under the existing NEARBY_DEVICES permission group, users who have already granted other NEARBY_DEVICES permissions will not be prompted again. By declaring and requesting this permission, your app can discover and connect to devices on the local area network (LAN), such as smart home devices or casting receivers. This prevents malicious apps from exploiting unrestricted local network access for covert user tracking and fingerprinting.
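As a sketch of how the data plan values above might be used, a streaming player could clamp its rendition ladder to the carrier cap. In a real app the cap would come from getStreamingAppMaxDownlinkKbps; the helper below and its inputs are illustrative.

```kotlin
// Hypothetical helper: pick the highest rendition bitrate (kbps) that fits
// under the carrier-allocated downlink cap. In a real app the cap would come
// from getStreamingAppMaxDownlinkKbps; this function itself is illustrative.
fun selectRenditionKbps(renditionsKbps: List<Int>, maxDownlinkKbps: Int): Int =
    renditionsKbps.filter { it <= maxDownlinkKbps }.maxOrNull()
        ?: renditionsKbps.minOrNull() // cap below lowest rendition: take the lowest
        ?: 0                          // empty ladder
```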
Apps targeting Android 17 or higher will now have two paths to maintain communication with LAN devices: adopt system-mediated, privacy-preserving device pickers to skip the permission prompt, or explicitly request this new permission at runtime to maintain local network communication. Time zone offset change broadcast Android now provides a reliable broadcast intent, ACTION_TIMEZONE_OFFSET_CHANGED, triggered when the system's time zone offset changes, such as during Daylight Saving Time transitions. This complements the existing broadcast intents ACTION_TIME_CHANGED and ACTION_TIMEZONE_CHANGED, which are triggered when the Unix timestamp changes and when the time zone ID changes, respectively. NPU Management and Prioritization Apps targeting Android 17 that need to directly access the NPU must declare FEATURE_NEURAL_PROCESSING_UNIT in their manifest to avoid being blocked from accessing the NPU. This includes apps that use the LiteRT NPU delegate, vendor-specific SDKs, or the deprecated NNAPI. ICU 78 and Unicode 17 support Core internationalization libraries have been updated to ICU 78, expanding support for new scripts, characters, and emoji blocks, and enabling direct formatting of time objects. SMS OTP protection Android is expanding its SMS OTP protection by automatically delaying access to SMS messages with OTP. Previously, the protection was primarily focused on the SMS Retriever format, wherein the delivery of messages containing an SMS retriever hash is delayed for most apps for three hours. However, certain apps, such as the default SMS app and the app corresponding to the hash, are exempt from this delay. This update extends the protection to all SMS messages with OTP. For most apps, SMS messages containing an OTP will only be accessible after a delay of three hours to help prevent OTP hijacking. The SMS_RECEIVED_ACTION broadcast will be withheld and SMS provider database queries will be filtered.
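The SMS OTP delay described above reduces to a simple predicate. Here is a hypothetical pure-Kotlin model; the three-hour delay is from this post, but the function and parameter names are illustrative, not platform APIs.

```kotlin
import java.time.Duration
import java.time.Instant

// Hypothetical model of the OTP access rule: the three-hour delay is from the
// post; the names here are illustrative and not platform APIs.
val OTP_ACCESS_DELAY: Duration = Duration.ofHours(3)

// Exempt apps (e.g. the default SMS app) can read the message immediately;
// all other apps must wait out the delay.
fun canReadOtpSms(receivedAt: Instant, now: Instant, isExempt: Boolean): Boolean =
    isExempt || Duration.between(receivedAt, now) >= OTP_ACCESS_DELAY
```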
For delayed apps, the SMS message becomes available once the three hours have elapsed. Delayed access to WebOTP format SMS messages If an app has permission to read SMS messages but is not the intended recipient of the OTP (as determined by domain verification), the WebOTP format SMS message will only be accessible after three hours have elapsed. This change is designed to improve user security by ensuring that only apps associated with the domain mentioned in the message can programmatically read the verification code. This change applies to all apps regardless of their target API level. Delayed access to standard SMS messages with OTP For SMS messages containing an OTP that do not use the WebOTP or SMS Retriever formats, the OTP SMS will only be accessible after three hours for most apps. This change only applies to apps that target Android 17 (API level 37) or higher. Certain apps, such as the default SMS app, the assistant app, and connected device companion apps, will be exempt from this delay. All apps that rely on reading SMS messages for OTP extraction should transition to using SMS Retriever or SMS User Consent APIs to ensure continued functionality. The Android 17 schedule We'll be moving quickly from this Beta to our Platform Stability milestone, targeted for March. At this milestone, we'll deliver final SDK/NDK APIs. From that time forward, your app can target SDK 37 and publish to Google Play to help you complete your testing and collect user feedback in the several months before the general availability of Android 17. A year of releases We plan for Android 17 to continue to get updates in a series of quarterly releases. The upcoming release in Q2 is the only one where we introduce planned app-breaking behavior changes. We plan to have a minor SDK release in Q4 with additional APIs and features. Get started with Android 17 You can enroll any supported Pixel device to get this and future Android Beta updates over-the-air.
If you don’t have a Pixel device, you can use the 64-bit system images with the Android Emulator in Android Studio. If you are currently in the Android Beta program, you will be offered an over-the-air update to Beta 2. If you have Android 26Q1 Beta and would like to take the final stable release of 26Q1 and exit Beta, you need to ignore the over-the-air update to 26Q2 Beta 2 and wait for the release of 26Q1. We're looking for your feedback so please report issues and submit feature requests on the feedback page. The earlier we get your feedback, the more we can include in our work on the final release. For the best development experience with Android 17, we recommend that you use the latest preview of Android Studio (Panda). Once you’re set up, here are some of the things you should do: Compile against the new SDK, test in CI environments, and report any issues in our tracker on the feedback page. Test your current app for compatibility, learn whether your app is affected by changes in Android 17, and install your app onto a device or emulator running Android 17 and extensively test it. We’ll update the preview/beta system images and SDK regularly throughout the Android 17 release cycle. Once you’ve installed a beta build, you’ll automatically get future updates over-the-air for all later previews and Betas. For complete information, visit the Android 17 developer site. Join the conversation As we move toward Platform Stability and the general availability of Android 17 later this year, your feedback remains our most valuable asset. Whether you’re an early adopter on the Canary channel or an app developer testing on Beta 2, consider joining our communities and filing feedback. We’re listening.
Posted by Matthew McCullough, VP of Product Management, Android Development User expectations for AI on their devices are fundamentally shifting how they interact with their apps. Instead of opening apps to do tasks step-by-step, they're asking AI to do the heavy lifting for them. In this new interaction model, success is shifting from getting users to open your app, to successfully fulfilling their tasks and helping them get more done faster. To help you evolve your apps for this agentic future, we're introducing early stage developer capabilities that bridge the gap between your apps and agentic apps and personalized assistants, such as Google Gemini. While we are in the early, beta stages of this journey, we’re designing these features with privacy and security at their core as our first step in exploring this paradigm shift as an app ecosystem. Empowering apps with AppFunctions Android AppFunctions allows apps to expose data and functionality directly to AI agents and assistants. With the AppFunctions Jetpack library and platform APIs, developers can create self-describing functions that agentic apps can discover and execute via natural language. Mirroring how backend capabilities are declared via MCP cloud servers, AppFunctions provides an on-device solution for Android apps. Much like WebMCP, it executes these functions locally on the device rather than on a server. The Samsung Gallery integration with Gemini on the Galaxy S26 series showcases AppFunctions in action. Instead of manually scrolling through photo albums, you can now simply ask Gemini to "Show me pictures of my cat from Samsung Gallery." Gemini takes the user query, intelligently identifies and triggers the right function, and presents the returned photos from Samsung Gallery directly in the Gemini app, so users never need to leave. This experience is multimodal and can be done via voice or text. 
Users can even use the returned photos in follow-up conversations, like sending them to friends in a text message. This integration is currently available on the Galaxy S26 series and will soon expand to Samsung devices running OneUI 8.5 and higher. Through AppFunctions, Gemini can already automate tasks across app categories like Calendar, Notes, and Tasks, on devices from multiple manufacturers. Whether it’s coordinating calendar events, organizing notes, or setting to-do reminders, users can streamline daily activities in one place. Enabling agentic apps with intelligent UI automation While AppFunctions provides a structured framework and more control for apps to communicate with AI agents and assistants, we know that not every interaction has a dedicated integration yet. We’re also developing a UI automation framework for AI agents and assistants to intelligently execute generic tasks on users’ installed apps, with user transparency and control built in. This is the platform doing the heavy lifting: developers get agentic reach with zero code and no major engineering lift right now. To get feedback as we refine this framework, we’re starting with an early preview on the Galaxy S26 series and select Pixel 10 devices, where users will be able to delegate multi-step tasks to Gemini with just a long press of the power button. Launching as a beta feature in the Gemini app, this will support a curated selection of apps in the food delivery, grocery, and rideshare categories in the US and Korea to start. Whether users need to place a complex pizza order for their family members with particular tastes, coordinate a multi-stop rideshare with co-workers, or reorder their last grocery purchase, Gemini can help complete tasks using the context already available from your apps, without any developer work needed. Users are in control while a task is being actioned in the background through UI automation.
For any automation action, users have the option to monitor a task’s progress via notifications or "live view" and can switch to manual control at any point to take over the experience. Gemini is also designed to alert users before completing sensitive tasks, such as making a purchase. Looking ahead In Android 17, we’re looking to broaden these capabilities to reach even more users, developers, and device manufacturers. We are currently building experiences with a small set of app developers, focusing on high-quality user experiences as the ecosystem evolves. We plan to share more details later this year on how you can use AppFunctions and UI automation to enable agentic integrations for your app. Stay tuned for updates.
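To make the AppFunctions idea of "self-describing functions" more concrete, here is a purely hypothetical pure-Kotlin model of a registry that an agent could query and invoke. None of these types belong to the actual AppFunctions API; they only illustrate the discover-then-execute pattern described above.

```kotlin
// Purely hypothetical model of "self-describing functions"; none of these
// types belong to the actual AppFunctions API.
data class AppFunctionSpec(
    val name: String,
    val description: String, // what an agent reads to pick a function
    val execute: (Map<String, String>) -> String,
)

class FunctionRegistry {
    private val functions = mutableMapOf<String, AppFunctionSpec>()

    fun register(spec: AppFunctionSpec) { functions[spec.name] = spec }

    // An agent could read these descriptions to decide which function
    // matches a natural-language request.
    fun describeAll(): List<String> =
        functions.values.map { "${it.name}: ${it.description}" }

    // Execute a discovered function by name, or return null if unknown.
    fun invoke(name: String, args: Map<String, String>): String? =
        functions[name]?.execute?.invoke(args)
}
```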
