AEM + AI Content Generation Workflows: Automate Content at Scale
Introduction
Content teams using AEM are under constant pressure to produce more content, faster, and in multiple languages. With the rise of LLMs like OpenAI's GPT-4 and Anthropic's Claude, it is now practical to integrate AI content generation directly into AEM workflows — automating first drafts, metadata, translations, and content fragment creation.
In this post, we'll build a practical AEM Workflow that calls an external AI API, generates content, and writes it back into AEM Content Fragments.
Architecture
AEM Workflow Trigger (Page / Asset event)
↓
Custom Workflow Process Step (OSGi)
↓
HTTP Call → OpenAI / Claude API
↓
Parse AI Response
↓
Write to AEM Content Fragment / Page Properties
Step 1: Store Your API Key Securely in AEM
Never hardcode API keys. Store them as Cloud Manager environment variables (use the secret type so the value is never exposed in plain text) and read them via OSGi config.
OSGi Config — AI Service Configuration
AEM as a Cloud Service expects OSGi configuration as .cfg.json files (the older sling:OsgiConfig node format is not supported there). The file name matches the component's configuration PID:

File: /apps/your-project/osgiconfig/config.prod/com.yourproject.ai.AIContentServiceImpl.cfg.json

{
  "ai.api.endpoint": "https://api.openai.com/v1/chat/completions",
  "ai.api.key": "$[secret:OPENAI_API_KEY]",
  "ai.model": "gpt-4o",
  "ai.max.tokens": 1000
}
OSGi Service to Read Config
@ObjectClassDefinition(name = "AI Service Configuration")
public @interface AIServiceConfig {
String ai_api_endpoint() default "https://api.openai.com/v1/chat/completions";
String ai_api_key() default "";
String ai_model() default "gpt-4o";
long ai_max_tokens() default 1000;
}
Step 2: Build the AI Client Service
@Component(service = AIContentService.class)
@Designate(ocd = AIServiceConfig.class)
public class AIContentServiceImpl implements AIContentService {
private static final Logger log = LoggerFactory.getLogger(AIContentServiceImpl.class);
private AIServiceConfig config;
@Activate
protected void activate(AIServiceConfig config) {
this.config = config;
}
@Override
public String generateContent(String prompt) {
try {
HttpClient client = HttpClient.newHttpClient();
// Build request JSON
String requestBody = buildRequestBody(prompt);
HttpRequest request = HttpRequest.newBuilder()
.uri(URI.create(config.ai_api_endpoint()))
.header("Authorization", "Bearer " + config.ai_api_key())
.header("Content-Type", "application/json")
.POST(HttpRequest.BodyPublishers.ofString(requestBody))
.timeout(Duration.ofSeconds(30))
.build();
HttpResponse<String> response = client.send(
request, HttpResponse.BodyHandlers.ofString());
if (response.statusCode() == 200) {
return parseResponse(response.body());
} else {
log.error("AI API error {}: {}", response.statusCode(), response.body());
}
} catch (Exception e) {
log.error("Failed to call AI API", e);
}
return null;
}
private String buildRequestBody(String prompt) {
return "{"
+ "\"model\": \"" + config.ai_model() + "\","
+ "\"messages\": ["
+ " {\"role\": \"system\", \"content\": \"You are an expert AEM content writer.\"},"
+ " {\"role\": \"user\", \"content\": \"" + escapeJson(prompt) + "\"}"
+ "],"
+ "\"max_tokens\": " + config.ai_max_tokens()
+ "}";
}
    private String parseResponse(String responseBody) {
        // Naive extraction for brevity; breaks on escaped quotes inside
        // the content. Use Gson/Jackson in production.
        int start = responseBody.indexOf("\"content\":\"");
        if (start < 0) {
            return null;
        }
        start += 11;
        int end = responseBody.indexOf("\"", start);
        return end < 0 ? null : responseBody.substring(start, end);
    }
    private String escapeJson(String input) {
        // Escape backslashes first, then quotes and control characters
        return input.replace("\\", "\\\\")
                .replace("\"", "\\\"")
                .replace("\n", "\\n")
                .replace("\r", "\\r")
                .replace("\t", "\\t");
    }
}
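The substring-based parseResponse above is deliberately minimal. Gson or Jackson is the right choice in production, but if you want to stay dependency-free, a regex that tolerates escaped quotes is already a big step up. A stdlib-only sketch (the ResponseParser class and extractContent method are my own names, not part of the service above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ResponseParser {

    // Matches the first "content" string value, allowing escaped characters
    // (\" or \\) inside it, which plain indexOf slicing cannot handle.
    private static final Pattern CONTENT =
            Pattern.compile("\"content\"\\s*:\\s*\"((?:\\\\.|[^\"\\\\])*)\"");

    public static String extractContent(String responseBody) {
        Matcher m = CONTENT.matcher(responseBody);
        if (!m.find()) {
            return null;
        }
        // Unescape the most common JSON escapes; backslashes last
        return m.group(1)
                .replace("\\n", "\n")
                .replace("\\\"", "\"")
                .replace("\\\\", "\\");
    }
}
```

This mainly illustrates why naive substring slicing fails the moment the model returns quoted text inside the content field.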
Step 3: Create the Workflow Process Step
@Component(
service = WorkflowProcess.class,
property = {
"process.label=AI Content Generation Step",
"service.description=Generates content using AI and writes to Content Fragment"
}
)
public class AIContentGenerationStep implements WorkflowProcess {

    private static final Logger log =
            LoggerFactory.getLogger(AIContentGenerationStep.class);

    @Reference
    private AIContentService aiContentService;

    @Reference
    private ResourceResolverFactory resolverFactory;

    @Override
    public void execute(WorkItem workItem, WorkflowSession wfSession,
                        MetaDataMap metaDataMap) throws WorkflowException {
        String payloadPath = workItem.getWorkflowData()
                .getPayload().toString();
        // Get topic from workflow metadata, falling back to a default
        String topic = metaDataMap.get("topic", "AEM Best Practices");
        String prompt = "Write a 200-word introduction about: " + topic
                + ". Use a professional tone suitable for a developer blog.";
        String generatedContent = aiContentService.generateContent(prompt);
        if (generatedContent != null) {
            writeToContentFragment(payloadPath, generatedContent);
        } else {
            log.warn("AI generation returned no content for {}", payloadPath);
        }
    }

    private void writeToContentFragment(String fragmentPath, String content) {
        Map<String, Object> param = new HashMap<>();
        param.put(ResourceResolverFactory.SUBSERVICE, "workflowService");
        try (ResourceResolver resolver = resolverFactory
                .getServiceResourceResolver(param)) {
            Resource fragmentResource = resolver.getResource(
                    fragmentPath + "/jcr:content/data/master");
            if (fragmentResource != null) {
                ModifiableValueMap props = fragmentResource
                        .adaptTo(ModifiableValueMap.class);
                if (props != null) {
                    props.put("bodyText", content);
                    props.put("ai_generated", true);
                    props.put("ai_generated_date", Calendar.getInstance());
                    resolver.commit();
                }
            } else {
                log.warn("No master variation found at {}", fragmentPath);
            }
        } catch (Exception e) {
            log.error("Failed to write AI content to {}", fragmentPath, e);
        }
    }
}
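Note that the getServiceResourceResolver call above only works if the "workflowService" subservice is mapped to a real service user with write access under /content/dam. A sketch of the mapping (the bundle symbolic name your-project.core and the user ai-content-writer are placeholders for your own; the user itself would typically be created and granted ACLs via a repoinit script):

File: /apps/your-project/osgiconfig/config/org.apache.sling.serviceusermapping.impl.ServiceUserMapperImpl.amended~ai-content.cfg.json

```json
{
  "user.mapping": [
    "your-project.core:workflowService=[ai-content-writer]"
  ]
}
```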
Step 4: Register Workflow Model in JCR
<!-- /conf/your-project/settings/workflow/models/ai-content-generation/jcr:content -->
<jcr:root xmlns:jcr="http://www.jcp.org/jcr/1.0"
xmlns:cq="http://www.day.com/jcr/cq/1.0"
jcr:primaryType="cq:WorkflowModel"
jcr:title="AI Content Generation Workflow"
description="Generates content using LLM API and saves to Content Fragment">
<flow jcr:primaryType="cq:WorkflowNode"
type="START"/>
    <aiStep jcr:primaryType="cq:WorkflowNode"
            title="AI Content Generation Step"
            type="PROCESS">
        <metaData jcr:primaryType="nt:unstructured"
                PROCESS="com.yourproject.workflow.AIContentGenerationStep"
                PROCESS_AUTO_ADVANCE="true"
                topic="AEM Cloud Migration"/>
    </aiStep>
<end jcr:primaryType="cq:WorkflowNode"
type="END"/>
</jcr:root>
Step 5: Trigger Workflow via REST API
You can trigger this workflow programmatically (e.g., from a CI/CD pipeline or external system):
curl -X POST \
  "http://localhost:4502/etc/workflow/instances" \
  -u admin:admin \
  -d "model=/var/workflow/models/ai-content-generation" \
  -d "payloadType=JCR_PATH" \
  -d "payload=/content/dam/your-project/content-fragments/article-1"
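If you are already inside the JVM (a servlet, scheduler, or event handler), you can skip HTTP and start the workflow through the Granite workflow API instead. A sketch, assuming you already hold a service ResourceResolver and that the model has been deployed to /var/workflow/models:

```java
// Adapt a service resolver to a WorkflowSession (com.adobe.granite.workflow API)
WorkflowSession wfSession = resolver.adaptTo(WorkflowSession.class);
WorkflowModel model = wfSession.getModel(
        "/var/workflow/models/ai-content-generation");
WorkflowData data = wfSession.newWorkflowData(
        "JCR_PATH", "/content/dam/your-project/content-fragments/article-1");
wfSession.startWorkflow(model, data);
```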
Bonus: Auto-Generate SEO Metadata with AI
Add a second workflow step that generates meta title and description:
String seoPrompt = "Generate an SEO meta title (max 60 chars) and "
+ "meta description (max 155 chars) for an article about: " + topic
+ ". Return as JSON: {\"title\": \"\", \"description\": \"\"}";
String seoJson = aiContentService.generateContent(seoPrompt);
// Parse and write to page jcr:content/metadata
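Even with the character limits spelled out in the prompt, models regularly overshoot them, so it is worth clamping the parsed values before writing them to the page. A small stdlib-only helper (the SeoFieldGuard class is my own, not an AEM API; the limits mirror the prompt above):

```java
public class SeoFieldGuard {

    // Trims text to at most maxLen characters, cutting at the last
    // word boundary so we never end mid-word.
    public static String clamp(String text, int maxLen) {
        if (text == null) {
            return "";
        }
        String trimmed = text.trim();
        if (trimmed.length() <= maxLen) {
            return trimmed;
        }
        String cut = trimmed.substring(0, maxLen);
        int lastSpace = cut.lastIndexOf(' ');
        // Fall back to a hard cut if the text is one long token
        return (lastSpace > 0 ? cut.substring(0, lastSpace) : cut).trim();
    }
}
```

Call clamp(title, 60) and clamp(description, 155) on the parsed JSON fields before writing them to jcr:content/metadata.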
Key Takeaways
- Always store API keys in Cloud Manager environment variables, never in code.
- Add an ai_generated flag to every node written by AI — useful for auditing and compliance.
- Use workflow metadata to pass dynamic prompts, so the same workflow can serve multiple content types.
- Set a reasonable timeout (30s) on your HTTP client — LLM APIs can be slow under load.
- Consider a retry mechanism with exponential backoff for production workflows.
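For the last point, a generic retry wrapper around the generateContent call is enough for most workflow steps. A minimal sketch (the Retry class is my own, not an AEM or AI-vendor API):

```java
import java.util.concurrent.Callable;

public class Retry {

    // Calls the task up to maxAttempts times, doubling the delay after
    // each failure; rethrows the last exception if all attempts fail.
    public static <T> T withBackoff(Callable<T> task, int maxAttempts,
                                    long initialDelayMs) throws Exception {
        long delay = initialDelayMs;
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delay);
                    delay *= 2; // exponential backoff
                }
            }
        }
        throw last;
    }
}
```

In a workflow context, weigh this against simply letting the step fail and retrying the whole work item, so a sleeping retry loop does not tie up a workflow thread.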
What's Next?
Next up: Edge Delivery Services + AI — how to combine AEM's modern delivery layer with AI-powered content generation and personalization.
Published on aemrules.com | Tags: AEM, AI, OpenAI, Content Generation, AEM Workflow, Content Fragments, LLM