Tutorial: Get started with the Gemini API


This tutorial demonstrates how to access the Gemini API directly from your Android app using the Google AI client SDK for Android. You can use this client SDK if you don't want to work directly with REST APIs or server-side code (like Python) to access Gemini models in your Android app.

In this tutorial, you'll learn how to do the following:

  • Set up your project, including your API key
  • Generate text from text-only input
  • Generate text from text-and-image input (multimodal)
  • Build multi-turn conversations (chat)
  • Use streaming for faster interactions

In addition, this tutorial includes sections about advanced use cases (like counting tokens), as well as options for controlling content generation.

Consider accessing Gemini on-device

The client SDK for Android described in this tutorial lets you access the Gemini Pro models, which run on Google's servers. For use cases that involve processing sensitive data, offline availability, or cost savings for frequently used user flows, you may want to consider accessing Gemini Nano, which runs on-device. For more details, refer to the Android (on-device) tutorial.

Prerequisites

This tutorial assumes that you're familiar with using Android Studio to develop Android apps.

To complete this tutorial, make sure that your development environment and Android app meet the following requirements:

  • Android Studio (latest version)
  • Your Android app must target API level 21 or higher (a minimal Gradle sketch follows this list).
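
The API level requirement lives in your module-level Gradle configuration. Assuming it corresponds to the module's minSdk value (an assumption for this sketch; check your project's existing configuration), the relevant block in <project>/<app-module>/build.gradle.kts might look like this:

android {
    defaultConfig {
        // Assumption: "API level 21 or higher" maps to minSdk = 21
        minSdk = 21
    }
}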

Set up your project

Before calling the Gemini API, you need to set up your Android project, which includes setting up your API key, adding the SDK dependencies to your Android project, and initializing the model.

Set up your API key

To use the Gemini API, you'll need an API key. If you don't already have one, create a key in Google AI Studio.

Get an API key

Secure your API key

It's strongly recommended that you do not check an API key into your version control system. Instead, you should store it in a local.properties file (which is located in the root directory of your project but excluded from version control), and then use the Secrets Gradle plugin for Android to read your API key as a Build Configuration variable.

Kotlin

// Access your API key as a Build Configuration variable
val apiKey = BuildConfig.apiKey

Java

// Access your API key as a Build Configuration variable
String apiKey = BuildConfig.apiKey;

All the snippets in this tutorial use this best practice. Also, if you want to see how the Secrets Gradle plugin is implemented, you can review the sample app for this SDK or use the latest preview of Android Studio Iguana, which has a Gemini API Starter template (which includes the local.properties file to get you started).
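
As an illustration of the approach described above, here is a minimal sketch of how the local.properties entry and the Secrets Gradle plugin might be wired together. The property name apiKey matches the BuildConfig.apiKey variable used throughout this tutorial; refer to the Secrets Gradle plugin documentation and the sample app for the exact version declaration and configuration required in your project.

// local.properties (kept out of version control) -- assumed entry:
// apiKey=YOUR_API_KEY

// Module (app-level) build.gradle.kts -- apply the plugin so the property
// above is exposed as the BuildConfig.apiKey variable used in the snippets:
plugins {
    // ... other plugins
    id("com.google.android.libraries.mapsplatform.secrets-gradle-plugin")
}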

Add the SDK dependency to your project

  1. In your module (app-level) Gradle configuration file (like <project>/<app-module>/build.gradle.kts), add the dependency for the Google AI SDK for Android:

    Kotlin

    dependencies {
      // ... other androidx dependencies
    
      // add the dependency for the Google AI client SDK for Android
      implementation("com.google.ai.client.generativeai:generativeai:0.6.0")
    }
    

    Java

    For Java, you need to add two additional libraries.

    dependencies {
        // ... other androidx dependencies
    
        // add the dependency for the Google AI client SDK for Android
        implementation("com.google.ai.client.generativeai:generativeai:0.6.0")
    
        // Required for one-shot operations (to use `ListenableFuture` from Guava Android)
        implementation("com.google.guava:guava:31.0.1-android")
    
        // Required for streaming operations (to use `Publisher` from Reactive Streams)
        implementation("org.reactivestreams:reactive-streams:1.0.4")
    }
    
  2. Sync your Android project with Gradle files.

Initialize the generative model

Before you can make any API calls, you need to initialize the generative model:

Kotlin

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with most use cases
    modelName = "gemini-1.5-flash",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey
)

Java

For Java, you also need to initialize the GenerativeModelFutures object.

// Use a model that's applicable for your use case
// The Gemini 1.5 models are versatile and work with most use cases
GenerativeModel gm = new GenerativeModel(/* modelName */ "gemini-1.5-flash",
// Access your API key as a Build Configuration variable (see "Set up your API key" above)
    /* apiKey */ BuildConfig.apiKey);

// Use the GenerativeModelFutures Java compatibility layer which offers
// support for ListenableFuture and Publisher APIs
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

When specifying a model, note the following:

  • Use a model that's specific to your use case (for example, gemini-pro-vision is for multimodal input). Within this guide, the instructions for each implementation list the recommended model for each use case.

Implement common use cases

Now that your project is set up, you can explore using the Gemini API to implement different use cases:

Generate text from text-only input

When the prompt input includes only text, use a Gemini 1.5 model or the Gemini 1.0 Pro model with generateContent to generate text output:

Kotlin

Note that generateContent() is a suspend function and needs to be called from a Coroutine scope. If you're unfamiliar with Coroutines, read Kotlin Coroutines on Android.

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
    modelName = "gemini-1.5-flash",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey
)

val prompt = "Write a story about a magic backpack."
val response = generativeModel.generateContent(prompt)
print(response.text)
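
Because generateContent() is a suspend function, the call above has to run inside a coroutine scope somewhere in your app. As a minimal sketch (the ViewModel and viewModelScope from AndroidX Lifecycle are assumptions for illustration, not part of the snippet above), the call might be wrapped like this:

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import com.google.ai.client.generativeai.GenerativeModel
import kotlinx.coroutines.launch

class StoryViewModel(private val generativeModel: GenerativeModel) : ViewModel() {
    fun generateStory() {
        // Launch the suspend call in a scope tied to the ViewModel's lifecycle
        viewModelScope.launch {
            val response = generativeModel.generateContent("Write a story about a magic backpack.")
            println(response.text)
        }
    }
}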

Java

Note that generateContent() returns a ListenableFuture. If you're unfamiliar with this API, see the Android documentation about Using a ListenableFuture.

// The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
GenerativeModel gm = new GenerativeModel(/* modelName */ "gemini-1.5-flash",
// Access your API key as a Build Configuration variable (see "Set up your API key" above)
    /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Content content = new Content.Builder()
    .addText("Write a story about a magic backpack.")
    .build();

Executor executor = // ...

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Generate text from text-and-image input (multimodal)

Gemini provides various models that can handle multimodal input (Gemini 1.5 models and Gemini 1.0 Pro Vision) so that you can input both text and images. Make sure to review the image requirements for prompts.

When the prompt input includes both text and images, use a Gemini 1.5 model or the Gemini 1.0 Pro Vision model with generateContent to generate text output:

Kotlin

Note that generateContent() is a suspend function and needs to be called from a Coroutine scope. If you're unfamiliar with Coroutines, read Kotlin Coroutines on Android.

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
    modelName = "gemini-1.5-flash",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey
)

val image1: Bitmap = // ...
val image2: Bitmap = // ...

val inputContent = content {
    image(image1)
    image(image2)
    text("What's different between these pictures?")
}

val response = generativeModel.generateContent(inputContent)
print(response.text)

Java

Note that generateContent() returns a ListenableFuture. If you're unfamiliar with this API, see the Android documentation about Using a ListenableFuture.

// The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
GenerativeModel gm = new GenerativeModel(/* modelName */ "gemini-1.5-flash",
// Access your API key as a Build Configuration variable (see "Set up your API key" above)
    /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Bitmap image1 = // ...
Bitmap image2 = // ...

Content content = new Content.Builder()
    .addText("What's different between these pictures?")
    .addImage(image1)
    .addImage(image2)
    .build();

Executor executor = // ...

ListenableFuture<GenerateContentResponse> response = model.generateContent(content);
Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Build multi-turn conversations (chat)

Using Gemini, you can build freeform conversations across multiple turns. The SDK simplifies the process by managing the state of the conversation, so unlike with generateContent, you don't have to store the conversation history yourself.

To build a multi-turn conversation (like chat), use a Gemini 1.5 model or the Gemini 1.0 Pro model, and initialize the chat by calling startChat(). Then use sendMessage() to send a new user message, which will also append the message and the response to the chat history.

There are two possible options for the role associated with the content in a conversation:

  • user: the role that provides the prompts. This value is the default for sendMessage calls.

  • model: the role that provides the responses. This role can be used when calling startChat() with existing history.

Kotlin

Note that generateContent() is a suspend function and needs to be called from a Coroutine scope. If you're unfamiliar with Coroutines, read Kotlin Coroutines on Android.

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with multi-turn conversations (like chat)
    modelName = "gemini-1.5-flash",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey
)

val chat = generativeModel.startChat(
    history = listOf(
        content(role = "user") { text("Hello, I have 2 dogs in my house.") },
        content(role = "model") { text("Great to meet you. What would you like to know?") }
    )
)

chat.sendMessage("How many paws are in my house?")

Java

Note that generateContent() returns a ListenableFuture. If you're unfamiliar with this API, see the Android documentation about Using a ListenableFuture.

// The Gemini 1.5 models are versatile and work with multi-turn conversations (like chat)
GenerativeModel gm = new GenerativeModel(/* modelName */ "gemini-1.5-flash",
// Access your API key as a Build Configuration variable (see "Set up your API key" above)
    /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

// (optional) Create previous chat history for context
Content.Builder userContentBuilder = new Content.Builder();
userContentBuilder.setRole("user");
userContentBuilder.addText("Hello, I have 2 dogs in my house.");
Content userContent = userContentBuilder.build();

Content.Builder modelContentBuilder = new Content.Builder();
modelContentBuilder.setRole("model");
modelContentBuilder.addText("Great to meet you. What would you like to know?");
Content modelContent = modelContentBuilder.build();

List<Content> history = Arrays.asList(userContent, modelContent);

// Initialize the chat
ChatFutures chat = model.startChat(history);

// Create a new user message
Content userMessage = new Content.Builder()
    .setRole("user")
    .addText("How many paws are in my house?")
    .build();

Executor executor = // ...

// Send the message
ListenableFuture<GenerateContentResponse> response = chat.sendMessage(userMessage);

Futures.addCallback(response, new FutureCallback<GenerateContentResponse>() {
    @Override
    public void onSuccess(GenerateContentResponse result) {
        String resultText = result.getText();
        System.out.println(resultText);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

Use streaming for faster interactions

By default, the model returns a response after completing the entire generation process. You can achieve faster interactions by not waiting for the entire result, and instead use streaming to handle partial results.

The following example shows how to implement streaming with generateContentStream to generate text from a text-and-image input prompt.

Kotlin

Note that generateContentStream() is a suspend function and needs to be called from a Coroutine scope. If you're unfamiliar with Coroutines, read Kotlin Coroutines on Android.

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
    modelName = "gemini-1.5-flash",
    // Access your API key as a Build Configuration variable (see "Set up your API key" above)
    apiKey = BuildConfig.apiKey
)

val image1: Bitmap = // ...
val image2: Bitmap = // ...

val inputContent = content {
    image(image1)
    image(image2)
    text("What's the difference between these pictures?")
}

var fullResponse = ""
generativeModel.generateContentStream(inputContent).collect { chunk ->
    print(chunk.text)
    fullResponse += chunk.text
}

Java

The Java streaming methods in this SDK return a Publisher type from the Reactive Streams library.

// The Gemini 1.5 models are versatile and work with both text-only and multimodal prompts
GenerativeModel gm = new GenerativeModel(/* modelName */ "gemini-1.5-flash",
// Access your API key as a Build Configuration variable (see "Set up your API key" above)
    /* apiKey */ BuildConfig.apiKey);
GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Bitmap image1 = // ...
Bitmap image2 = // ...

Content content = new Content.Builder()
    .addText("What's different between these pictures?")
    .addImage(image1)
    .addImage(image2)
    .build();

Publisher<GenerateContentResponse> streamingResponse =
    model.generateContentStream(content);

final String[] fullResponse = {""};

streamingResponse.subscribe(new Subscriber<GenerateContentResponse>() {
    @Override
    public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        fullResponse[0] += chunk;
    }

    @Override
    public void onComplete() {
        System.out.println(fullResponse[0]);
    }

    @Override
    public void onError(Throwable t) {
        t.printStackTrace();
    }

    @Override
    public void onSubscribe(Subscription s) { }
});

You can use a similar approach for text-only input and chat use cases:

Kotlin

Note that generateContentStream() is a suspend function and needs to be called from a Coroutine scope. If you're unfamiliar with Coroutines, read Kotlin Coroutines on Android.

// Use streaming with text-only input
generativeModel.generateContentStream(inputContent).collect { chunk ->
    print(chunk.text)
}
// Use streaming with multi-turn conversations (like chat)
val chat = generativeModel.startChat()
chat.sendMessageStream(inputContent).collect { chunk ->
    print(chunk.text)
}

Java

The Java streaming methods in this SDK return a Publisher type from the Reactive Streams library.

// Use streaming with text-only input
Publisher<GenerateContentResponse> streamingResponse =
    model.generateContentStream(inputContent);

final String[] fullResponse = {""};

streamingResponse.subscribe(new Subscriber<GenerateContentResponse>() {
    @Override
    public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        fullResponse[0] += chunk;
    }

    @Override
    public void onComplete() {
        System.out.println(fullResponse[0]);
    }

    // ... other methods omitted for brevity
});
// Use streaming with multi-turn conversations (like chat)
ChatFutures chat = model.startChat(history);

Publisher<GenerateContentResponse> streamingResponse =
    chat.sendMessageStream(inputContent);

final String[] fullResponse = {""};

streamingResponse.subscribe(new Subscriber<GenerateContentResponse>() {
    @Override
    public void onNext(GenerateContentResponse generateContentResponse) {
        String chunk = generateContentResponse.getText();
        fullResponse[0] += chunk;
    }

    @Override
    public void onComplete() {
        System.out.println(fullResponse[0]);
    }

    // ... other methods omitted for brevity
});

Implement advanced use cases

The common use cases described in the previous section of this tutorial help you become comfortable with using the Gemini API. This section describes some use cases that might be considered more advanced.

Function calling

Function calling makes it easier for you to get structured data outputs from generative models. You can then use these outputs to call other APIs and return the relevant response data to the model. In other words, function calling helps you connect generative models to external systems so that the generated content includes the most up-to-date and accurate information. Learn more in the function calling tutorial.

Count tokens

When using long prompts, it might be useful to count tokens before sending any content to the model. The following examples show how to use countTokens() for various use cases:

Kotlin

Note that countTokens() is a suspend function and needs to be called from a Coroutine scope. If you're unfamiliar with Coroutines, read Kotlin Coroutines on Android.

// For text-only input
val (totalTokens) = generativeModel.countTokens("Write a story about a magic backpack.")

// For text-and-image input (multi-modal)
val multiModalContent = content {
    image(image1)
    image(image2)
    text("What's the difference between these pictures?")
}

val (totalTokens) = generativeModel.countTokens(multiModalContent)

// For multi-turn conversations (like chat)
val history = chat.history
val messageContent = content { text("This is the message I intend to send")}
val (totalTokens) = generativeModel.countTokens(*history.toTypedArray(), messageContent)

Java

Note that countTokens() returns a ListenableFuture. If you're unfamiliar with this API, see the Android documentation about Using a ListenableFuture.

Content text = new Content.Builder()
    .addText("Write a story about a magic backpack.")
    .build();

Executor executor = // ...

// For text-only input
ListenableFuture<CountTokensResponse> countTokensResponse = model.countTokens(text);

Futures.addCallback(countTokensResponse, new FutureCallback<CountTokensResponse>() {
    @Override
    public void onSuccess(CountTokensResponse result) {
        int totalTokens = result.getTotalTokens();
        System.out.println("TotalTokens = " + totalTokens);
    }

    @Override
    public void onFailure(Throwable t) {
        t.printStackTrace();
    }
}, executor);

// For text-and-image input
Bitmap image1 = // ...
Bitmap image2 = // ...

Content multiModalContent = new Content.Builder()
    .addImage(image1)
    .addImage(image2)
    .addText("What's different between these pictures?")
    .build();

ListenableFuture<CountTokensResponse> countTokensResponse = model.countTokens(multiModalContent);

// For multi-turn conversations (like chat)
List<Content> history = chat.getChat().getHistory();

Content messageContent = new Content.Builder()
    .addText("This is the message I intend to send")
    .build();

Collections.addAll(history, messageContent);

ListenableFuture<CountTokensResponse> countTokensResponse = model.countTokens(history.toArray(new Content[0]));

Options to control content generation

You can control content generation by configuring model parameters and by using safety settings.

Configure model parameters

Every prompt you send to the model includes parameter values that control how the model generates a response. The model can generate different results for different parameter values. Learn more about Model parameters.

Kotlin

val config = generationConfig {
    temperature = 0.9f
    topK = 16
    topP = 0.1f
    maxOutputTokens = 200
    stopSequences = listOf("red")
}

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with most use cases
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey,
    generationConfig = config
)

Java

GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.temperature = 0.9f;
configBuilder.topK = 16;
configBuilder.topP = 0.1f;
configBuilder.maxOutputTokens = 200;
configBuilder.stopSequences = Arrays.asList("red");

GenerationConfig generationConfig = configBuilder.build();

// The Gemini 1.5 models are versatile and work with most use cases
GenerativeModel gm = new GenerativeModel(
    "gemini-1.5-flash",
    BuildConfig.apiKey,
    generationConfig
);

GenerativeModelFutures model = GenerativeModelFutures.from(gm);

Use safety settings

You can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with medium and/or high probability of being unsafe content across all dimensions. Learn more about Safety settings.

Here's how to set one safety setting:

Kotlin

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with most use cases
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey,
    safetySettings = listOf(
        SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH)
    )
)

Java

SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
    BlockThreshold.ONLY_HIGH);

// The Gemini 1.5 models are versatile and work with most use cases
GenerativeModel gm = new GenerativeModel(
    "gemini-1.5-flash",
    BuildConfig.apiKey,
    null, // generation config is optional
    Collections.singletonList(harassmentSafety)
);

GenerativeModelFutures model = GenerativeModelFutures.from(gm);

You can also set more than one safety setting:

Kotlin

val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, BlockThreshold.ONLY_HIGH)

val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, BlockThreshold.MEDIUM_AND_ABOVE)

val generativeModel = GenerativeModel(
    // The Gemini 1.5 models are versatile and work with most use cases
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey,
    safetySettings = listOf(harassmentSafety, hateSpeechSafety)
)
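
Safety settings can be combined with a generation config on the same model. A minimal sketch, assuming the config value from the "Configure model parameters" section and the two settings above are in scope together:

// Combining both options on one model -- assumes `config`, `harassmentSafety`,
// and `hateSpeechSafety` from the earlier snippets are available here
val configuredModel = GenerativeModel(
    modelName = "gemini-1.5-flash",
    apiKey = BuildConfig.apiKey,
    generationConfig = config,
    safetySettings = listOf(harassmentSafety, hateSpeechSafety)
)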

Java

SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
    BlockThreshold.ONLY_HIGH);

SafetySetting hateSpeechSafety = new SafetySetting(HarmCategory.HATE_SPEECH,
    BlockThreshold.MEDIUM_AND_ABOVE);

// The Gemini 1.5 models are versatile and work with most use cases
GenerativeModel gm = new GenerativeModel(
    "gemini-1.5-flash",
    BuildConfig.apiKey,
    null, // generation config is optional
    Arrays.asList(harassmentSafety, hateSpeechSafety)
);

GenerativeModelFutures model = GenerativeModelFutures.from(gm);

What's next

  • Prompt design is the process of creating prompts that elicit the desired response from language models. Writing well-structured prompts is an essential part of ensuring accurate, high-quality responses from a language model. Learn about best practices for prompt writing.

  • Gemini offers several model variations to meet the needs of different use cases, such as input types and complexity, implementations for chat or other dialog language tasks, and size constraints. Learn about the available Gemini models.

  • Gemini offers options for requesting rate limit increases. The rate limit for Gemini Pro models is 60 requests per minute (RPM).

  • The client SDK for Android described in this tutorial lets you access the Gemini Pro models, which run on Google's servers. For use cases that involve processing sensitive data, offline availability, or cost savings for frequently used user flows, you may want to consider accessing Gemini Nano, which runs on-device. For more details, refer to the Android (on-device) tutorial.