TL;DR:
On this family summer trip to Asia, I've admittedly been relying heavily on Google Translate. As someone who lives in the world of APIs, I can't help but think of "its API,"^ the Google Cloud Translation API. Pure translation, though, is not the same as finding the right words (although they're similar), and that makes me think of natural language understanding (NLU). When considering NLU and NLP (natural language processing), I think of the Cloud Natural Language API. While there's a brief intro to this API in another post on API keys, let's take a closer look at both of these language-flavored Google Cloud (GCP) APIs and how to use them with Python and Node.js.
^ -- The Google Translate product itself doesn't offer an official API, but the Cloud Translation API serves that purpose.
Introduction
If you're interested in Google APIs and how to access them from Python (and sometimes Node.js), you're in the right place. I especially like to present code and samples you won't find in Google's documentation (unless I wrote it). :-) I also aim for broad coverage across a wide variety of products and topics while leaning towards server-side functionality. Here are some covered so far:
- Google API credentials: API keys and OAuth client IDs
- Serverless computing, including the latest news with App Engine
- Generative AI with the Gemini API via Google AI or GCP Vertex AI
- Google Workspace (GWS) APIs, e.g., exporting Google Docs as PDF files
- Geolocation, time zones, and walking directions with Google Maps (GMP) APIs
Background and motivation
If you're interested in exploring text-based language processing but are new to AI/ML and aren't ready to train your own models or dive into LLMs (large language models) yet, the Natural Language & Translation APIs from Google Cloud are some of the simplest ways to get up to speed with AI/ML. With these APIs, you don't have to train models because they're already backed by pre-trained models from Google.
The purpose of this post is to give developers enough familiarity with these APIs so you can start using them right away in your next Python or Node.js projects. Client libraries encapsulate much of the required boilerplate and do a lot of the heavy lifting so developers can focus on using API features and solving problems sooner. The code samples in this post leverage the respective client library packages for those languages. Let's do a brief introduction to these APIs.
Translation API
The Cloud Translation API gives you the core functionality of a tool like Google Translate, allowing you to access that feature programmatically. It is backed by a pre-trained model from Google, and your apps can use it to translate an arbitrary string in a supported language to its equivalent in another supported language using state-of-the-art Neural Machine Translation (NMT). Google updates this NMT model on a semi-regular basis, "when more training data or better techniques become available."
While out-of-scope for this post, if this model doesn't suit your purposes exactly, or you need to fine-tune a model with your own data, use the AutoML version of the Translation API instead. LLM-based translations, including translations from Google's Gemini, are also now available. You can read more about accessing all three from Vertex AI in the docs. For introductory purposes, the pre-trained Translation API suffices.
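To get a feel for what a call looks like before we set anything up, here's a minimal sketch of my own (not from this post's repo) using the Basic edition Python client library covered below; it detects a string's language rather than translating it, and it assumes credentials are already configured as described later:

```python
# Minimal sketch (mine, not from the repo): detect the source language of a
# string with the Basic (v2) Translation client library installed later on.
from google.cloud import translate_v2 as translate

TL = translate.Client()                 # picks up Application Default Credentials
result = TL.detect_language('¿Dónde está la biblioteca?')
print(result['language'], result['confidence'])   # e.g. 'es' plus a confidence value
```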
Natural Language API
The Cloud Natural Language API offers multiple natural language features. It can reveal the structure and meaning of text, performing sentiment analysis, content classification, entity extraction, and syntactic analysis, and it can do so across multiple supported languages.
Similar to the Translation API, the Natural Language API is backed by a pre-trained model from Google. If you need a fine-tuned model trained on your own data, the original solution was the AutoML Natural Language API (AMLNL). Google then merged AMLNL into the larger Vertex AI platform, and more recently, you would fine-tune the Gemini model from Vertex AI. Those are great options when your needs grow or change; for now, the pre-trained Natural Language API works fine.
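As a taste of a feature the main samples below don't exercise, here's a minimal sketch of my own (not from the repo) performing entity extraction, assuming the client library and credentials are set up as described in the rest of this post:

```python
# Minimal sketch (mine): entity extraction with the Natural Language API.
from google.cloud import language_v1 as language

NL = language.LanguageServiceClient()   # picks up Application Default Credentials
doc = {'content': 'Sundar Pichai spoke at the Consumer Electronics Show.',
       'type_': language.types.Document.Type.PLAIN_TEXT}
for ent in NL.analyze_entities(document=doc).entities:
    print(f'{ent.name}: salience {ent.salience:.2f}')
```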
Prerequisites
As with most Google APIs, there are several distinct steps that you must take before you can use them from your application:
- Install client libraries for your development language(s)
- Enable desired API(s) in developer's console ("DevConsole")
- Get credentials (choose from your options)
These actions are independent of each other, meaning you can (almost always) do them in any order. However, unlike with other Google APIs, there's a bit more to discuss with regard to credentials, so I'll cover the basics in this section and circle back to the topic towards the end.
⚠️ ALERT: Billing required (but free?!?)
While many Google APIs are free to use, GCP APIs are not. You must enable billing in the Cloud Console and create a billing account supported by a financial instrument (payment method depends on region/currency) in order to run the code samples. If you're new to GCP, review the billing and onboarding guide.
That said, Google surely wants you to "try before you buy," get an idea of what kinds of services are available, and how to use their APIs, so GCP has a free monthly (or daily) tier for certain products, including the Natural Language and Translation APIs. Review the information on the Natural Language and Translation API pricing pages to understand how their usage is billed. Running the sample scripts in this post a reasonable number of times should not incur billing... just be sure to stay within their corresponding free tier limits.
Install client libraries
Commands are listed below for Python (2 or 3) and Node.js to install the client libraries for both APIs. Pick the one for your development language, or both if you're inclined:
Language | Command |
---|---|
Python | `pip install -U pip google-cloud-language google-cloud-translate` (or `pip3`) |
Node.js | `npm i @google-cloud/language @google-cloud/translate` |
Confirm all required packages have been installed correctly with the validation commands below... if they complete without error, the installation succeeded and the libraries are ready for use:
Language | Command |
---|---|
Python | `python -c "from google.cloud import language, translate"` (or `python3`) |
Node.js | `node -e "require('@google-cloud/language'); require('@google-cloud/translate').v2"` |
Client libraries for both APIs are available in a variety of languages. If you work in a different development language or want to learn more about the client libraries in general, see the relevant documentation page for the Natural Language API and Translation API.
📝 NOTE: Translation API has two editions |
---|
The Translation API comes in Basic and Advanced editions. We are using Basic, so you only need to follow the client library instructions for that edition. To learn more, see the page comparing both editions in the documentation. |
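For contrast only, here's a rough sketch of mine of what the same kind of call looks like with the Advanced (v3) edition client; treat it purely as an illustration, since the rest of this post sticks with Basic, and note that `PROJECT_ID` is a placeholder for your own project ID:

```python
# Rough sketch (mine) of the Advanced (v3) edition, for comparison only.
from google.cloud import translate_v3 as translate

PROJECT_ID = 'your-project-id'          # placeholder for your GCP project ID
client = translate.TranslationServiceClient()
rsp = client.translate_text(request={
        'parent': f'projects/{PROJECT_ID}/locations/global',
        'contents': ['Hello world'],
        'target_language_code': 'es',
})
print(rsp.translations[0].translated_text)
```

Unlike the Basic client, the Advanced edition is project- and location-scoped, which is part of what makes Basic the simpler choice for getting started.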
Enable APIs
All Google APIs must be enabled before they can be used. As mentioned in the earlier sidebar on billing, an active billing account is required, as is creating a new project (or reusing an existing one) with a working billing account attached to it. If you haven't done that yet, you'll be prompted to do so when enabling billable APIs like Natural Language or Translation.
There are generally three ways of enabling Google APIs:
- DevConsole manually -- Enable one API at a time by following these steps:
  - Go to the DevConsole
  - Click on the Library tab
  - Search for "Language", select "Cloud Natural Language API", and click the Enable API button
  - Go back, search for "Translate", pick "Cloud Translation API", and enable that one
- DevConsole link -- You may be new to Google APIs or don't have experience enabling APIs manually in the DevConsole. If this is you, the above steps can be simplified with a single DevConsole link (and click) that enables both APIs.
- Command-line (`gcloud`) -- For those who prefer working in a terminal, you can enable APIs with a single command in the Cloud Shell, or locally on your computer if you installed the Cloud SDK (which includes the `gcloud` command-line tool [CLI]) and initialized its use. If this is you, issue the following command to enable both APIs: `gcloud services enable language.googleapis.com translate.googleapis.com`. Confirm all the APIs you've enabled using this command: `gcloud services list`.
Get credentials
Google APIs require one or more of the following credentials types:
- API keys
- OAuth client IDs
- Service accounts
Which you use depends on which APIs you're trying to access. When using GCP APIs, you're more likely to use service accounts and OAuth client IDs.
Which credentials type?
To learn the code and run the sample scripts from the blog post in your development environment, you're more likely to use the latter (OAuth client ID). When you're ready for production and move away from your "dev box" to a VM or other server in the cloud, you'd then transition to a service account.
To ease the burden on the developer -- no one wants to write different code for different credentials types -- GCP client libraries use an associated library named Application Default Credentials (ADC) for API access. Depending on the execution environment, ADC will point to either service account or OAuth client ID credentials. Create your credentials by going to your terminal and issuing the following command (then follow the instructions): `gcloud auth application-default login`
(optional) More about ADC and credentials
The command above asks you for the relevant permissions then obtains user-authorized credentials (OAuth client ID) accessible by the ADC library so you can experiment with the code samples. Learn more about ADC in the docs, starting with the ADC setup page. Also see the page on the different authentication methods and credentials types.
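If you're ever unsure which credentials ADC resolves to on a given machine, a quick sketch like the following (my own, using the google-auth library that the GCP client libraries already depend on) prints out what it finds:

```python
# Sketch (mine): see which credentials and project ADC resolves to locally.
import google.auth

creds, project = google.auth.default()
print('Project:', project)
print('Credentials class:', type(creds).__name__)   # user vs. service account credentials
```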
Of the three credentials types, OAuth client IDs and service accounts provide authorized access to APIs and the data behind them. The last credentials type, API keys, provides simple access to APIs (no user permissions needed beyond API key creation). Because API keys impose less friction and offer an easier onboarding process than the other credentials types, many developers prefer them, so I'll show you how to access both APIs with API keys in an optional section at the end of this post if that interests you.
OAuth client IDs provide user-authorized access, so they're generally used with APIs that access user-owned data, for example, Google Workspace (GWS) APIs -- which isn't the case here. When used with GCP APIs, they provide slightly elevated access over API keys and are less confusing than learning about service accounts and IAM (identity & access management) permissions.
Learn more about using OAuth client IDs with GWS APIs in a separate post series covering the topic. To learn more about API keys, there's another post series covering that subject. (A third series on service accounts is forthcoming.)
Code samples
If you've completed the above steps (enabled billing & APIs, installed the client libraries, and obtained credentials via the `gcloud` command above), you're ready to look at and run code!
Application
The sample script translates text from English to Spanish using the Translation API and demonstrates two features from the Natural Language API: sentiment analysis and content classification. Sentiment analysis determines whether the provided text is positive or negative (or neither), while content classification attempts to classify the text as belonging to one or more categories. The same text is used for both analysis and translation.
In each script, you'll first see the text followed by the three forms of processing just described above. Let's start with Python.
Python
The main Python script is available in the repo as `transnatlang-svcacct-gcp.py`:
'''
transnatlang-svcacct-gcp.py -- GCP Natural Language & Translation APIs demo (Python 2/3-compatible)
'''
from __future__ import print_function
import sys
from google.cloud import language_v1 as language, translate_v2 as translate
# Text to process
_TEXT = '''\
Google, headquartered in Mountain View, unveiled the new Android
phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.
'''
TEXT = ' '.join(_TEXT.strip().split())
LINE = '-' * 60
# Display text to process
print('TEXT:')
print(TEXT)
print(LINE)
# Create API clients/endpoints
NL = language.LanguageServiceClient()
TL = translate.Client()
# Detect text sentiment
TYPE = 'type_' if sys.version_info.major == 3 else 'type'
BODY = {'content': TEXT, TYPE: language.types.Document.Type.PLAIN_TEXT}
sent = NL.analyze_sentiment(document=BODY).document_sentiment
print('\nSENTIMENT: score (%.2f), magnitude (%.2f)' % (
        sent.score, sent.magnitude))
print(LINE)
# Categorize text
print('\nCATEGORIES:')
categories = NL.classify_text(document=BODY).categories
for cat in categories:
    print('* %s (%.2f)' % (cat.name[1:], cat.confidence))
print(LINE)
# Translate text to Spanish
TARGET = 'es'
txlns = TL.translate(TEXT, TARGET)
txln = txlns[0] if isinstance(txlns, list) else txlns
print('\nTRANSLATION to %r:\n%s' % (TARGET, txln['translatedText']))
After the imports and seeing the text that will be processed, the code creates the API clients that are used to access API features. Sentiment analysis comes first, followed by content classification, and finally, the text is translated. This script is architected as Python 2/3-compatible (to help remaining 2.x users migrate to 3.x), and running it with either Python version results in identical output:
$ python transnatlang-svcacct-gcp.py
TEXT:
Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer
Electronics Show. Sundar Pichai said in his keynote that users love their new Android
phones.
------------------------------------------------------------
SENTIMENT: score (0.20), magnitude (0.50)
------------------------------------------------------------
CATEGORIES:
* Internet & Telecom (0.76)
* Computers & Electronics (0.64)
* News (0.56)
------------------------------------------------------------
TRANSLATION to 'es':
Google, con sede en Mountain View, presentó el nuevo teléfono Android en el Consumer
Electronics Show. Sundar Pichai dijo en su discurso de apertura que a los usuarios
les encantan sus nuevos teléfonos Android.
Node.js
The near-equivalent Node.js script can be accessed as `transnatlang-svcacct-gcp.js` in the repo:
// transnatlang-svcacct-gcp.js -- Cloud Natural Language & Translation APIs demo
const LanguageClient = require('@google-cloud/language');
const {Translate} = require('@google-cloud/translate').v2;
// Text to process
const TEXT = `Google, headquartered in Mountain View, unveiled
the new Android phone at the Consumer Electronics Show. Sundar
Pichai said in his keynote that users love their new Android
phones.`.replace(/\n/g, ' ');
const LINE = '-'.repeat(60);
const BODY = {content: TEXT, type: 'PLAIN_TEXT'};
// Create API clients/endpoints
const NL = new LanguageClient.LanguageServiceClient();
const TL = new Translate();
// Detect text sentiment
async function sentAnalysis() {
  const [result] = await NL.analyzeSentiment({document: BODY});
  const sent = result.documentSentiment;
  console.log(`\nSENTIMENT: score (${sent.score.toFixed(2)}), magnitude (${sent.magnitude.toFixed(2)})`);
  console.log(LINE);
}
// Categorize text
async function categorizeText() {
  console.log('\nCATEGORIES:');
  const [result] = await NL.classifyText({document: BODY});
  const categories = result.categories;
  for (let cat of categories) {
    console.log(`* ${cat.name.slice(1)} (${cat.confidence.toFixed(2)})`);
  }
  console.log(LINE);
}
// Translate text to Spanish
async function translateText() {
  const TARGET = 'es';
  const [txlns] = await TL.translate(TEXT, TARGET);
  let txln = Array.isArray(txlns) ? txlns[0] : txlns;
  console.log(`\nTRANSLATION to "${TARGET}":\n${txln}`);
}
// Display text to process
console.log('TEXT:');
console.log(TEXT);
console.log(LINE);
// Execute all
sentAnalysis()
  .then(categorizeText)
  .then(translateText)
  .catch(console.error);
Executing the Node.js script results in the same output as the Python version:
$ node transnatlang-svcacct-gcp.js
TEXT:
Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer
Electronics Show. Sundar Pichai said in his keynote that users love their new Android
phones.
------------------------------------------------------------
SENTIMENT: score (0.20), magnitude (0.50)
------------------------------------------------------------
CATEGORIES:
* Internet & Telecom (0.76)
* Computers & Electronics (0.64)
* News (0.56)
------------------------------------------------------------
TRANSLATION to "es":
Google, con sede en Mountain View, presentó el nuevo teléfono Android en el Consumer
Electronics Show. Sundar Pichai dijo en su discurso de apertura que a los usuarios
les encantan sus nuevos teléfonos Android.
For those who prefer a modern ECMAScript module, here's the equivalent `.mjs` file, available in the repo as `transnatlang-svcacct-gcp.mjs`:
// transnatlang-svcacct-gcp.mjs -- Cloud Natural Language & Translation APIs demo
import LanguageClient from '@google-cloud/language';
import {v2} from '@google-cloud/translate';
// Text to process
const TEXT = `Google, headquartered in Mountain View, unveiled
the new Android phone at the Consumer Electronics Show. Sundar
Pichai said in his keynote that users love their new Android
phones.`.replace(/\n/g, ' ');
const LINE = '-'.repeat(60);
const BODY = {content: TEXT, type: 'PLAIN_TEXT'};
// Create API clients/endpoints
const NL = new LanguageClient.LanguageServiceClient();
const TL = new v2.Translate();
// Detect text sentiment
async function sentAnalysis() {
  const [result] = await NL.analyzeSentiment({document: BODY});
  const sent = result.documentSentiment;
  console.log(`\nSENTIMENT: score (${sent.score.toFixed(2)}), magnitude (${sent.magnitude.toFixed(2)})`);
  console.log(LINE);
}
// Categorize text
async function categorizeText() {
  console.log('\nCATEGORIES:');
  const [result] = await NL.classifyText({document: BODY});
  const categories = result.categories;
  for (let cat of categories) {
    console.log(`* ${cat.name.slice(1)} (${cat.confidence.toFixed(2)})`);
  }
  console.log(LINE);
}
// Translate text to Spanish
async function translateText() {
  const TARGET = 'es';
  const [txlns] = await TL.translate(TEXT, TARGET);
  let txln = Array.isArray(txlns) ? txlns[0] : txlns;
  console.log(`\nTRANSLATION to "${TARGET}":\n${txln}`);
}
// Display text to process
console.log('TEXT:');
console.log(TEXT);
console.log(LINE);
// Execute all
sentAnalysis()
  .then(categorizeText)
  .then(translateText)
  .catch(console.error);
Take my word for it that its output is identical to the CommonJS version. Play around with the code, try using other languages with both the Translation and Natural Language APIs, modify the code with different data, or experiment with other API features as desired. Regardless, you now have basic working knowledge of both APIs. (Skip the next section if you're not interested in using API keys with these APIs.)
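If you do want to experiment with other target languages as suggested above, one way to see what's available (a sketch of mine, not in the repo) is to ask the Basic Translation client itself:

```python
# Sketch (mine): list the language codes the Basic Translation client supports,
# handy when experimenting with targets other than Spanish.
from google.cloud import translate_v2 as translate

TL = translate.Client()
for lang in TL.get_languages():         # each entry has 'language' and 'name' keys
    print(lang['language'], '=', lang['name'])
```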
(optional) Python: API key access
Now for some stuff you definitely won't find much information on, or code samples for, in the official Google documentation (or anywhere else, for that matter).
Background
As mentioned earlier, use of GCP APIs generally leverages OAuth client IDs or service accounts. API keys are much easier to use and implement in applications, though. Unfortunately, API keys are easy to lose or leak, so many consider them less secure than the other credentials types. (Also see the sidebar below.)
Generally, API keys are used with Google APIs that access "public data," meaning data not owned by a (human or robot) user or belonging to a project. This includes text strings sent to APIs to be processed, analyzed, or translated, and both APIs covered today are part of that group of GCP APIs accepting API keys. (More information can be found in the post series on API keys.)
Another hurdle with API keys is that, because they're neither typical nor recommended for GCP APIs, support for them isn't always complete. While the Python Natural Language API client library supports API keys, the Python Translation API client library does not. Both of these are considered product client libraries because there is one per product.
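For the curious, here's roughly what that looks like with the Natural Language product client, as a sketch of mine; it assumes a reasonably recent client library version that accepts an `api_key` client option, and there's no equivalent for the Translation product client, hence the switch described next:

```python
# Sketch (mine): the Natural Language *product* client accepting an API key
# via client options (assumes a reasonably recent library version).
from google.cloud import language_v1 as language

API_KEY = '<YOUR-API-KEY>'              # e.g. imported from settings.py
NL = language.LanguageServiceClient(client_options={'api_key': API_KEY})
# ...analyze_sentiment()/classify_text() calls then work as in the earlier sample.
```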
You need to take a leap of faith here because you're about to be exposed to something you may not have seen before: the older but broader platform Google API client library for Python. This is the same client library used to access GWS APIs along with OAuth client IDs, so if you've ever used GWS APIs, this won't take you by surprise like it will developers who have only used GCP APIs and product client libraries.
In order to keep the code consistent and not mix styles of different client libraries, I need to switch the sample app to use the platform client library and the older auth libraries that go with it. If you're willing to take this leap, continue on to creating an API key to call both language-oriented APIs with.
📝 NOTE: Node.js API key support unknown |
---|
I have yet to try using API keys in Node.js with these APIs, whether with the product or platform client libraries. If you have a working example, please file an issue and submit a PR. |
Get credentials (API key)
Follow these steps to create an API key:
- Go to the DevConsole credentials page
- Click + Create Credentials at the top
- Select API key and wait a few seconds for completion
- Copy and save the API key as a variable, `API_KEY = '<YOUR-API-KEY>'`, to `settings.py` for Python (you may need to refresh the credentials page and click Show key to see it)
⚠️ WARNING: Keep API keys secure |
---|
Storing API keys in files (or hard-coding them in application code, or even assigning them to environment variables by hand) is for prototyping and learning purposes only. When going to production, put them in a secrets manager or inject them as environment variables at deploy time. Files like settings.py or .env containing API keys are susceptible to leaking. Under no circumstances should you upload files like those to any public or private repo, put sensitive data like that in Terraform config files, add such files to Docker layers, etc., because once your API key leaks, everyone in the world can use it. |
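One low-effort step up from a `settings.py` file is reading the key from the environment at runtime, along these lines (my own sketch; the variable name is arbitrary):

```python
# Sketch (mine): read the API key from the environment instead of settings.py
# so nothing sensitive lives alongside the code.
import os

API_KEY = os.environ.get('API_KEY')
if not API_KEY:
    raise SystemExit('Set the API_KEY environment variable first')
```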
Install platform client library
Follow these steps in your terminal to install and validate the platform client library:
- Install: `pip install -U pip google-api-python-client` (or `pip3`)
- Validate: `python -c "import googleapiclient"` (or `python3`)
Sample app using API keys
Assuming you've already enabled both APIs, you're good to go. The API key version of the Python script shown below is available as `transnatlang-apikey-old.py` in the repo:
'''
transnatlang-apikey-old.py -- GCP Natural Language & Translation APIs demo (Python 2/3-compatible)
'''
from __future__ import print_function
from googleapiclient import discovery
from settings import API_KEY
# Text to process
_TEXT = '''\
Google, headquartered in Mountain View, unveiled the new Android
phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.
'''
TEXT = ' '.join(_TEXT.strip().split())
LINE = '-' * 60
# Display text to process
print('TEXT:')
print(TEXT)
print(LINE)
# Create API clients/endpoints
NL = discovery.build('language', 'v1', developerKey=API_KEY)
TL = discovery.build('translate', 'v2', developerKey=API_KEY)
# Detect text sentiment
BODY = {'content': TEXT, 'type': 'PLAIN_TEXT'}
sent = NL.documents().analyzeSentiment(
        body={'document': BODY}).execute().get('documentSentiment')
print('\nSENTIMENT: score (%.2f), magnitude (%.2f)' % (
        sent['score'], sent['magnitude']))
print(LINE)
# Categorize text
print('\nCATEGORIES:')
categories = NL.documents().classifyText(
        body={'document': BODY}).execute().get('categories')
for cat in categories:
    print('* %s (%.2f)' % (cat['name'][1:], cat['confidence']))
print(LINE)
# Translate text to Spanish
TARGET = 'es'
txln = TL.translations().list(
        q=TEXT, target=TARGET).execute().get('translations')[0]
print('\nTRANSLATION to %r:\n%s' % (TARGET, txln['translatedText']))
As you can guess, the output is identical to the earlier versions above, so we won't show it here. More important is seeing how the code differs between the two client library styles.
Aside from the obvious differences in client library nomenclature and use of different credentials, API usage is fairly similar. Not visually obvious is that the older platform client library calls the REST versions of the GCP APIs whereas the newer product client libraries call the gRPC versions which generally perform better... yet another reason why the product client libraries are always recommended.
(optional) Modern Python 3 version
Another optional version to look at is a modern, Python 3-only alternative replete with function & variable type annotations and f-strings, and asynchronous in nature (although it operates in a somewhat synchronous manner). It closely resembles the JS versions because those are already async.
'''
transnatlang-svcacct-gcp-async.py -- GCP Natural Language & Translation APIs demo (Python 3-only)
'''
import asyncio
from google.cloud import language_v1 as language, translate_v2 as translate
# Text to process
_TEXT: str = '''\
Google, headquartered in Mountain View, unveiled the new Android
phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.
'''
TEXT: str = ' '.join(_TEXT.strip().split())
LINE: str = '-' * 60
# Create API clients/endpoints
NL: language.LanguageServiceClient = language.LanguageServiceClient()
TL: translate.client.Client = translate.Client()
BODY: dict = {'content': TEXT, 'type_': language.types.Document.Type.PLAIN_TEXT}
async def sentAnalysis() -> None:
    'Detect text sentiment'
    sent = NL.analyze_sentiment(document=BODY).document_sentiment
    print(f'\nSENTIMENT: score ({sent.score:.2f}), magnitude ({sent.magnitude:.2f})')

async def categorizeText() -> None:
    'Categorize text'
    print('\nCATEGORIES:')
    categories = NL.classify_text(document=BODY).categories
    for cat in categories:
        print(f'* {cat.name[1:]} ({cat.confidence:.2f})')

async def translateText() -> None:
    'Translate text to Spanish'
    TARGET: str = 'es'
    txlns = TL.translate(TEXT, TARGET)
    txln = txlns[0] if isinstance(txlns, list) else txlns
    print(f"\nTRANSLATION to '{TARGET}':\n{txln['translatedText']}")

async def main() -> None:
    'Execute all'
    print('TEXT:')
    print(TEXT)              # Display text to process
    print(LINE)
    await sentAnalysis()     # Detect text sentiment
    print(LINE)
    await categorizeText()   # Categorize text
    print(LINE)
    await translateText()    # Translate text to Spanish
asyncio.run(main())
The output here is the same as all other versions.
Summary
When it comes to getting up to speed using AI/ML to process text, whether for translation or NLU, you can train your own models or fine-tune existing open source or proprietary models. But if you're new to AI/ML or don't need more complex features, getting started is much faster with APIs backed by single-task, pre-trained models. GCP provides the Translation and Natural Language APIs to meet this introductory need. The introductory samples in Python and Node.js are meant to help you get up to speed quickly and give you working code you can experiment with.
If you find any errors or have suggestions on content you'd like to see in future posts, be sure to leave a comment below, and if your organization needs help integrating Google technologies via its APIs, reach out to me by submitting a request at https://cyberwebconsulting.com. Lastly, below are links relevant to the content in this post for further exploration.
References
This post covered quite a bit, so there is a good amount of documentation to link you to:
Blog post code samples
GCP/Cloud Natural Language API
- Home page
- Pricing page
- Product client libraries
- Sentiment analysis
- Content classification
- Supported languages
- AutoML version of the Natural Language API
- Client library supports API keys
GCP/Cloud Translation API and Google Translate
- Home page
- Pricing page
- Product client libraries
- Supported languages
- Basic edition
- Advanced edition
- Comparing Basic vs. Advanced editions
- AutoML version of the Translation API
- Google Translate
Google AI, Gemini API, and GCP Vertex AI platform
- Google AI
- GCP Vertex AI
- Translating text with Vertex AI
- Migrating from AutoML to Vertex AI
- Finetuning Gemini
Google APIs, ADC/credentials, Cloud/DevConsole, etc.
- Cloud console
- DevConsole/API Manager
- En-/disable APIs
- Create credentials
- Cloud console Billing page
- GCP payment methods based on region/currency
- GCP billing and onboarding guide
- GCP "Always Free" tier
- Creating new projects
- Cloud SDK and `gcloud` installation
- Authentication & credentials guide
- GCP APIs accepting API keys
- Platform Google API client libraries
Similar blog content by the author
- Google API credentials: API keys
- Google API credentials: OAuth client IDs
- Generative AI with the Gemini API
- Image archive sample app (uses Google Sheets & Drive, Cloud Storage & Vision APIs)
- Explore GCP serverless platforms with a nebulous app (uses GCP serverless platforms & Cloud Translation API; also video 1 and video 2)
Technical session videos by the author (all LONG)
- Easy path to machine learning (60+ mins; introduces several GCP AI/ML APIs)
- Hands-on intro to AI/ML with Python (75+ mins; uses Cloud Vision API and executes its codelab tutorial)
- "Google APIs 102: GCP vs. non-GCP APIs" video (90+ mins; covers platform vs. product client libraries, credentials types, tour of various GCP & non-GCP Google APIs)
- Easy path to machine learning (90+ mins; introduces several GCP AI/ML APIs)
- GWS APIs, AI/ML APIs, and GCP serverless workshop (~4 hours[!]; made up of 3 technical sessions covering each topic)
WESLEY CHUN, MSCS, is a Google Developer Expert (GDE) in Google Cloud (GCP) & Google Workspace (GWS), author of Prentice Hall's bestselling "Core Python" series, co-author of "Python Web Development with Django", and has written for Linux Journal & CNET. He runs CyberWeb specializing in GCP & GWS APIs and serverless platforms, Python & App Engine migrations, and Python training & engineering. Wesley was one of the original Yahoo!Mail engineers and spent 13+ years on various Google product teams, speaking on behalf of their APIs, producing sample apps, codelabs, and videos for serverless migration and GWS developers. He holds degrees in Computer Science, Mathematics, and Music from the University of California, is a Fellow of the Python Software Foundation, and loves to travel to meet developers worldwide at conferences, user group events, and universities. Follow he/him @wescpy & his technical blog. Find this content useful? Contact CyberWeb for professional services or buy him a coffee (or tea)!