# How Adobe Sensei GenAI is Transforming AEM — A Practical Guide
## Introduction
Adobe Experience Manager (AEM) has always been a powerful content platform, but with the deep integration of **Adobe Sensei GenAI**, it is evolving from a content management system into an intelligent content engine. Whether you are an AEM Developer, Architect, or DevOps engineer, understanding how Sensei GenAI plugs into AEM will help you build smarter, faster, and more personalized digital experiences.
In this post, we'll cover what Adobe Sensei GenAI is, how it integrates with AEM Assets and AEM Sites, and practical examples of how to leverage it.
---
## What is Adobe Sensei GenAI?
Adobe Sensei is Adobe's AI and machine learning framework. **Sensei GenAI** is its generative AI layer — built on top of large language models (LLMs) and image generation models — and is natively embedded into Adobe's product suite including AEM, Adobe Analytics, and Workfront.
In the context of AEM, Sensei GenAI powers:
- **Smart Tagging** in AEM Assets (DAM)
- **Auto-captioning** for images and videos
- **Content Variations** generation for AEM Sites
- **Smart Crop** for responsive image renditions
- **AI-driven Search** across the DAM
---
## 1. Smart Tagging in AEM Assets
Smart Tags use a machine learning model, built on Adobe's Sensei framework, to automatically tag assets as they are uploaded to the DAM.
### How it Works
When an asset is uploaded, AEM sends it to the Sensei Smart Tagging service, which returns a set of predicted tags. These are written back to the asset metadata under `dam:suggestedTags`.
### Enable Smart Tags — OSGi Configuration
Navigate to:
```
/system/console/configMgr → Adobe CQ DAM Smart Tags Workflow Process
```
Ensure the Smart Tag workflow is linked to the **DAM Update Asset** workflow:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- /apps/your-project/config/com.adobe.cq.dam.smarttagging.impl.SmartTagsManagerImpl.xml -->
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
          jcr:primaryType="sling:OsgiConfig"
          trainedTagsEnabled="{Boolean}true"
          assetTagsMaxLength="{Long}25"
          minConfidence="{Double}0.5"/>
```
### Reading Smart Tags via JCR API (Java)
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.sling.api.resource.LoginException;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
import org.apache.sling.api.resource.ResourceResolverFactory;
import org.apache.sling.api.resource.ValueMap;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;

@Component(service = SmartTagReader.class)
public class SmartTagReader {

    @Reference
    private ResourceResolverFactory resolverFactory;

    public List<String> getSmartTags(String assetPath) {
        List<String> tags = new ArrayList<>();
        // Requires a service user mapping for the "smartTagService" subservice
        Map<String, Object> param = new HashMap<>();
        param.put(ResourceResolverFactory.SUBSERVICE, "smartTagService");
        try (ResourceResolver resolver = resolverFactory.getServiceResourceResolver(param)) {
            Resource metadata = resolver.getResource(assetPath + "/jcr:content/metadata");
            if (metadata != null) {
                ValueMap vm = metadata.getValueMap();
                String[] smartTags = vm.get("dam:suggestedTags", String[].class);
                if (smartTags != null) {
                    tags = Arrays.asList(smartTags);
                }
            }
        } catch (LoginException e) {
            // Log the failure and fall through to return an empty list
        }
        return tags;
    }
}
```
---
## 2. Content Variations with Sensei GenAI (AEM Sites)
**Content Variations** is a Sensei GenAI-powered feature in AEM Sites that allows authors to generate multiple versions of copy (headlines, descriptions, CTAs) using AI — directly from the AEM UI.
### How it Works
Authors open the **Content Variations** panel in AEM Sites, provide a prompt or seed content, and Sensei GenAI returns multiple variations. These are powered by an LLM (backed by Azure OpenAI in Adobe's infrastructure).
### Calling Content Variations API Programmatically
Adobe exposes this via the **Adobe IMS + AEM Cloud API**. Here's how to call it from a custom AEM service:
```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.osgi.service.component.annotations.Component;

@Component(service = ContentVariationService.class)
public class ContentVariationService {

    private static final String CONTENT_VARIATIONS_ENDPOINT =
        "https://experience.adobe.io/genai/content-variations/v1/generate";

    public String generateVariation(String prompt, String imsToken)
            throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        // Escape backslashes and quotes so the prompt cannot break the JSON body
        String safePrompt = prompt.replace("\\", "\\\\").replace("\"", "\\\"");
        String requestBody = "{"
            + "\"prompt\": \"" + safePrompt + "\","
            + "\"locale\": \"en-US\","
            + "\"count\": 3"
            + "}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create(CONTENT_VARIATIONS_ENDPOINT))
            .header("Authorization", "Bearer " + imsToken)
            .header("Content-Type", "application/json")
            .header("x-api-key", "your-api-key")
            .POST(HttpRequest.BodyPublishers.ofString(requestBody))
            .build();
        HttpResponse<String> response = client.send(request,
            HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```
**Sample Response:**
```json
{
"variations": [
{ "text": "Discover a smarter way to manage your content pipeline." },
{ "text": "Transform your digital experience with AI-powered content." },
{ "text": "Publish faster, personalize better — powered by Adobe Sensei." }
]
}
```
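In a real project you would parse this response with a JSON library such as Jackson or Gson. As a dependency-free sketch, a regex is enough to pull the `text` fields out of a payload shaped like the sample above (the `VariationParser` class name is hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VariationParser {

    // Captures the value of each "text" field in the variations array
    private static final Pattern TEXT_FIELD =
        Pattern.compile("\"text\"\\s*:\\s*\"([^\"]*)\"");

    public static List<String> extractTexts(String json) {
        List<String> texts = new ArrayList<>();
        Matcher m = TEXT_FIELD.matcher(json);
        while (m.find()) {
            texts.add(m.group(1));
        }
        return texts;
    }

    public static void main(String[] args) {
        String json = "{\"variations\":["
            + "{\"text\":\"Discover a smarter way to manage your content pipeline.\"},"
            + "{\"text\":\"Transform your digital experience with AI-powered content.\"}]}";
        System.out.println(extractTexts(json).size()); // prints 2
    }
}
```

A regex like this breaks down on escaped quotes inside the text, which is exactly why a proper JSON parser is the right choice once you move past prototyping.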
---
## 3. Smart Crop — AI-Driven Renditions
Smart Crop uses Sensei to automatically detect the focal point of an image and crop it for different aspect ratios (desktop, tablet, mobile) without losing the subject.
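To build intuition for what the service does, here is a simplified, stand-alone sketch of focal-point cropping — given a detected focal point and a target aspect ratio, compute the largest crop window that keeps the subject as close to center as the image bounds allow. This is illustrative only, not Sensei's actual algorithm:

```java
public class FocalCrop {

    /** Crop rectangle: left/top offsets plus width and height, in pixels. */
    public record Crop(int x, int y, int width, int height) {}

    /**
     * Largest crop of an (imgW x imgH) image with aspect ratio targetW:targetH,
     * centered on the focal point (fx, fy) and clamped to the image bounds.
     */
    public static Crop crop(int imgW, int imgH, int targetW, int targetH, int fx, int fy) {
        double targetRatio = (double) targetW / targetH;
        int cropW = imgW;
        int cropH = (int) Math.round(imgW / targetRatio);
        if (cropH > imgH) {           // too tall: constrain by height instead
            cropH = imgH;
            cropW = (int) Math.round(imgH * targetRatio);
        }
        // Center on the focal point, then clamp so the crop stays inside the image
        int x = Math.max(0, Math.min(fx - cropW / 2, imgW - cropW));
        int y = Math.max(0, Math.min(fy - cropH / 2, imgH - cropH));
        return new Crop(x, y, cropW, cropH);
    }

    public static void main(String[] args) {
        // 4000x3000 source, the 1920x600 "Desktop" ratio, subject near the right edge
        System.out.println(crop(4000, 3000, 1920, 600, 3500, 1500));
    }
}
```

The same math explains why one Image Profile can serve every breakpoint: only the target ratio changes, while the detected focal point is reused across renditions.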
### Configure Smart Crop in Image Profile
```json
{
"smartCropRenditions": [
{ "name": "Desktop", "width": 1920, "height": 600 },
{ "name": "Tablet", "width": 768, "height": 400 },
{ "name": "Mobile", "width": 375, "height": 300 }
],
"enableSmartCrop": true
}
```
Apply this Image Profile to a DAM folder via:
```
AEM Assets → Folder Properties → Image Profile → Select your profile
```
### Using Smart Crop Rendition in HTL (Sightly)
```html
<!-- Illustrative sketch: "image" stands for your own image model/use-object.
     Note that HTL silently ignores custom expression options, so a named smart
     crop is typically requested via the Dynamic Media delivery URL instead. -->
<sly data-sly-use.image="com.yourproject.models.ImageModel"/>
<img src="${image.src}?smartcrop=Desktop"
     alt="${image.alt}"
     loading="lazy" />
```
---
## 4. AI-Powered Search in DAM
Sensei enhances AEM's Omnisearch with **visual similarity search** and **semantic search**.
### Enable Visual Search — OSGi Config
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- com.adobe.cq.dam.visual.similarity.impl.VisualSimilaritySearchServiceImpl.xml -->
<jcr:root xmlns:sling="http://sling.apache.org/jcr/sling/1.0"
          xmlns:jcr="http://www.jcp.org/jcr/1.0"
          jcr:primaryType="sling:OsgiConfig"
          enabled="{Boolean}true"
          similarityThreshold="{Double}0.75"
          maxResults="{Long}20"/>
```
### Trigger Visual Search via API
```bash
curl -X POST \
"https://your-aem-instance/bin/dam/visual-search" \
-H "Authorization: Bearer <token>" \
-F "asset=@/path/to/reference-image.jpg" \
-F "limit=10"
```
---
## Architecture Overview
```
Author → Upload Asset to DAM
↓
DAM Update Asset Workflow
↓
Sensei Smart Tagging API ←→ Adobe IMS Auth
↓
Tags written to JCR metadata
↓
Available in Search / Personalization Engine
```
---
## Key Takeaways
- Adobe Sensei GenAI is **natively embedded** in AEM — no third-party AI setup needed for core features.
- Smart Tags, Smart Crop, and Content Variations are the **three most impactful** AI features for AEM teams today.
- All Sensei services communicate via **Adobe IMS tokens** — ensure your service users and API keys are correctly configured in Cloud Manager environment variables.
- Smart Tag confidence threshold (`minConfidence`) should be tuned based on your content type — `0.5` works well for generic assets, go higher (`0.7+`) for brand-specific content.
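To see what threshold tuning means in practice, here is a small stand-alone sketch (the tag/confidence map is hypothetical illustration data, not the actual Sensei data model) that keeps only tags at or above a chosen `minConfidence`:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class TagFilter {

    /** Returns the tag names whose confidence is at or above the threshold, sorted. */
    public static List<String> filter(Map<String, Double> tagConfidences, double minConfidence) {
        return tagConfidences.entrySet().stream()
            .filter(e -> e.getValue() >= minConfidence)
            .map(Map.Entry::getKey)
            .sorted()
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, Double> predicted = Map.of(
            "outdoor", 0.92, "mountain", 0.81, "snow", 0.63, "person", 0.41);
        // At 0.5, "person" is dropped; raising the threshold to 0.7 would drop "snow" too
        System.out.println(filter(predicted, 0.5));
    }
}
```

A higher threshold trades recall for precision: fewer, more reliable tags, which is usually what you want for brand-specific taxonomies.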
---
## What's Next?
In the next post, we'll explore **AEM + AI Content Generation Workflows** — how to connect OpenAI / Claude APIs directly into AEM workflows to automate content creation at scale.
---
*Published on aemrules.com | Tags: AEM, Adobe Sensei, GenAI, AEM Assets, Smart Tags, Content Variations, AEM Sites*