Now that we have a solid foundation for building and testing Docker containers, we can move on to the next step: building a Docker container that lets us test our AEM Dispatcher configuration.
The Dispatcher SDK is shipped by Adobe as part of the Adobe Experience Manager as a Cloud Service SDK, and it is a Docker container that allows you to test your dispatcher configuration.
The only drawback with this container is that it is not available on Docker Hub, so you need to build it yourself manually 👎💩.
Running a container to test your dispatcher configuration should be as simple as running the following command:
```shell
docker run -it --rm -v ${PWD}/dispatcher/src:/mnt/dev/src --name dispatcher -p 8080:80 -e AEM_PORT=4503 -e AEM_HOST=host.docker.internal aemdesign/dispatcher-sdk
```

or on Linux

```shell
docker run -it --rm -v `pwd`/dispatcher/src:/mnt/dev/src --name dispatcher -p 8080:80 -e AEM_PORT=4503 -e AEM_HOST=host.docker.internal aemdesign/dispatcher-sdk
```
This maps the source of your dispatcher configuration into the container and starts it. Additionally, it exposes port 8080 on your host machine and maps it to port 80 in the container. In the console output you will see the validation of your dispatcher configuration. This is critical for making sure that your dispatcher configuration is valid and will work as expected once you push it to Adobe Cloud Manager. This dispatcher image has its log level set to trace, so you will see in detail how your dispatcher rules apply.
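Before starting the dispatcher container, it also helps to confirm that the AEM publish instance you point it at via `AEM_HOST`/`AEM_PORT` is actually reachable; otherwise every request through the dispatcher will fail. Here is a minimal sketch of such a check — the `wait_for_aem` helper and its defaults are my own illustration, not part of the SDK:

```shell
#!/usr/bin/env bash
# wait_for_aem: poll a host/port over HTTP until it responds or retries run out.
# Prints "up" or "down" so the result is easy to consume from other scripts.
wait_for_aem() {
  local host="${1:-localhost}" port="${2:-4503}" retries="${3:-5}"
  local i
  for i in $(seq 1 "$retries"); do
    # -s silent, -f fail on HTTP errors, short timeout so retries stay quick
    if curl -sf -o /dev/null --connect-timeout 2 "http://${host}:${port}/"; then
      echo "up"
      return 0
    fi
    sleep 1
  done
  echo "down"
  return 1
}
```

With publish running locally you would call `wait_for_aem localhost 4503` before running the `docker run` command above.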
You can check out the source for this container at aem-design/docker-dispatcher-sdk. Feel free to contribute to this project.
To integrate this with AEM you need to make sure that you have a dispatcher configuration that is valid and that you can run it locally. Once you have that you can use the following docker-compose file to start your AEM instance and the dispatcher SDK.
This should be your typical AEM docker-compose file with the addition of the dispatcher SDK container. It allows your team to use the same configuration for local development and testing. It also enables testing of SSL and other dispatcher configurations that are otherwise not universally testable locally across every type of OS, so you can test the same way on all of them.
```yaml
version: "3.9"
services:
  author:
    image: aemdesign/aem:sdk-2023.3.11382
    hostname: author
    healthcheck:
      test: curl -u admin:admin --header Referer:localhost --silent --connect-timeout 5 --max-time 5 http://localhost:8080/system/console/bundles.json | grep -q \"state\":\"Installed\" && exit 1 || exit 0
      interval: 10s
      timeout: 10s
      retries: 20
      start_period: 1s
    ports:
      - 4502:8080
      - 30303:58242
    environment:
      - AEM_RUNMODE=-Dsling.run.modes=author,crx3,crx3tar,localdev,nosamplecontent
      - AEM_JVM_OPTS=-server -Xms248m -Xmx4524m -XX:MaxDirectMemorySize=256M -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true -Dorg.apache.felix.http.host=0.0.0.0 -Xdebug -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:58242
    volumes:
      - author-data:/aem/crx-quickstart/repository
    labels:
      # note that you want this frontend to match last; otherwise it will match login.${HOST_DOMAIN}
      traefik.frontend.priority: 1
      traefik.enable: true
      traefik.http.routers.author.rule: "Host(`author.localhost`)"
      traefik.http.routers.author.entrypoints: web
      traefik.http.routers.author_https.rule: "Host(`author.localhost`)"
      traefik.http.routers.author_https.tls: true
      traefik.http.routers.author_https.entrypoints: websecure
      traefik.http.services.author.loadbalancer.passHostHeader: true
    networks:
      - author-network
      - publish-network
      - dispatcher-network
      - internal
      - default
  publish:
    image: aemdesign/aem:sdk-2023.3.11382
    hostname: publish
    healthcheck:
      test: curl -u admin:admin --header Referer:localhost --silent --connect-timeout 5 --max-time 5 http://localhost:8080/system/console/bundles.json | grep -q \"state\":\"Installed\" && exit 1 || exit 0
      interval: 10s
      timeout: 10s
      retries: 20
      start_period: 30s
    ports:
      - 4503:8080
      - 30304:58242
    environment:
      - AEM_RUNMODE=-Dsling.run.modes=publish,crx3,crx3tar,localdev,nosamplecontent
      - AEM_JVM_OPTS=-server -Xms248m -Xmx1524m -XX:MaxDirectMemorySize=256M -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true -Dorg.apache.felix.http.host=0.0.0.0 -Xdebug -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:58242
    labels:
      # note that you want this frontend to match last; otherwise it will match login.${HOST_DOMAIN}
      traefik.frontend.priority: 1
      traefik.enable: true
      traefik.http.routers.publish.rule: "Host(`publish.localhost`)"
      traefik.http.routers.publish.entrypoints: web
      traefik.http.routers.publish_https.rule: "Host(`publish.localhost`)"
      traefik.http.routers.publish_https.tls: true
      traefik.http.routers.publish_https.entrypoints: websecure
      traefik.http.services.publish.loadbalancer.passHostHeader: true
    volumes:
      - publish-data:/aem/crx-quickstart/repository
    networks:
      - publish-network
      - internal
      - default
  dispatcher:
    image: aemdesign/dispatcher-sdk
    hostname: dispatcher
    ports:
      - 8081:80
    environment:
      - AEM_PORT=4503
      - AEM_HOST=host.docker.internal
      - DISP_LOG_LEVEL=trace1 #debug
    labels:
      # note that you want this frontend to match last; otherwise it will match login.${HOST_DOMAIN}
      traefik.frontend.priority: 1
      traefik.enable: true
      traefik.http.routers.dispatcher.rule: "Host(`dispatcher.localhost`)"
      traefik.http.routers.dispatcher.entrypoints: web
      traefik.http.routers.dispatcher_https.rule: "Host(`dispatcher.localhost`)"
      traefik.http.routers.dispatcher_https.tls: true
      traefik.http.routers.dispatcher_https.entrypoints: websecure
      traefik.http.services.dispatcher.loadbalancer.passHostHeader: true
    volumes:
      - ./dispatcher/src/:/mnt/dev/src/
    networks:
      - publish-network
      - dispatcher-network
      - internal
      - default
  # browser -(https)-> traefik(cert) -(http)-> dispatcher -(http)-> publish
  traefik:
    image: traefik
    environment:
      - TZ=Australia/Sydney
    security_opt:
      - no-new-privileges:true
    restart: "always"
    command:
      - "--log.level=ERROR"
      - "--accesslog=true"
      - "--api.insecure=true" # Don't do that in production!
      - "--api.dashboard=true" # Don't do that in production!
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--global.sendAnonymousUsage=true"
      # Entrypoints for HTTP, HTTPS, and NX (TCP + UDP)
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      # Manual keys
      - "--providers.file.directory=/etc/traefik/dynamic_conf"
      - "--providers.file.watch=true"
    labels:
      traefik.frontend.priority: 1
      traefik.enable: true
      traefik.http.routers.dashboard.rule: "Host(`traefik.localhost`) && (PathPrefix(`/api`) || PathPrefix(`/dashboard`))"
      traefik.http.routers.dashboard.entrypoints: websecure
      traefik.http.routers.dashboard.tls: true
      traefik.http.routers.dashboard.service: api@internal
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Persist certificates, so we can restart as often as needed
      - ./services/traefik/certs:/letsencrypt
      - ./services/traefik/config/dynamic:/etc/traefik/dynamic_conf/conf.yml:ro
    depends_on:
      createcert:
        condition: service_completed_successfully
    networks:
      - author-network
      - publish-network
      - dispatcher-network
      - internal
      - default
  createcert:
    image: aemdesign/mkcert:latest
    environment:
      - TZ=Australia/Sydney
    command:
      - "test -f mkcert.key && exit 0; mkcert -install && mkcert -key-file mkcert.key -cert-file mkcert.pem -client author.localhost publish.localhost dispatcher.localhost localhost 127.0.0.1 ::1 local.aem.design && openssl pkcs12 -export -out mkcert.pfx -in mkcert.pem -inkey mkcert.key -certfile rootCA.pem -passout pass:123"
    volumes:
      - ./services/traefik/certs:/certs
networks:
  default:
  internal:
  author-network:
  publish-network:
  dispatcher-network:
volumes:
  author-data:
  publish-data:
  dispatcher-data:
```
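A note on the healthcheck used by the author and publish services: it queries the Felix bundles console and inverts grep's exit code, so the container is reported unhealthy while any bundle is still stuck in the `Installed` state. The same logic, isolated into a plain shell function for illustration (`check_bundles` is a hypothetical name, not part of the image):

```shell
#!/usr/bin/env bash
# check_bundles: succeed (healthy, exit 0) only when the bundles JSON contains
# no bundle in the "Installed" state — mirroring the compose healthcheck, which
# runs: grep -q "state":"Installed" && exit 1 || exit 0
check_bundles() {
  echo "$1" | grep -q '"state":"Installed"' && return 1 || return 0
}
```

This is why a freshly started instance flips to healthy only once every bundle has moved past `Installed` into `Active` or `Fragment` state.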
This will create the following containers:
Hope this helps.
I hope you enjoyed this guide. If you have any questions or comments, feel free to contact me. I will be happy to help.
Let me know what you think and don’t forget to tell your friends.
This article is a follow-up to a series of articles on Docker: Docker, Containers Everywhere, Docker Automation Testing, and Docker Dispatcher SDK 🔧💪😎👍. Building further on that solid foundation, it's time to add services that provide value to every existing and new team member.
In the most recent article, Docker Dispatcher SDK 🔧💪😎👍, we looked at a docker-compose file that allowed us to create a dispatcher container. That compose file was configured to work with the author and publish containers. This is a great start, and now we can add the additional services you have been missing all this time.
TL;DR The best way to showcase this is to level up a freshly generated AEM SaaS project using Adobe’s AEM Project Archetype. You can find instructions on how to generate a project from scratch using Adobe’s AEM Project Archetype in the section Generate This Project.
If you already use the Adobe SaaS archetype, clone https://github.com/aem-design/aemdesign-project-services.git and copy the relevant files into your project. Here is a list of the files you will want to copy and merge into an existing project:
- .env (environment variables auto-loaded by functions.ps1)
- .gitignore (git ignore file)
- deploy-apps.ps1 (deploy only the apps module)
- deploy.all.ps1 (deploy all packages as a single package to author and publish)
- deploy.frontend.ps1 (deploy only the apps and frontend modules)
- deploy.ps1 (deploy script)
- dev-token.json (JSON file with your Adobe IMS token)
- docker-compose.yaml (docker compose file)
- functions.ps1 (helper functions)
- install_packages.ps1 (auto-install packages to author and publish using package.ps1)
- local-token.txt (text file with admin:admin in base64)
- package-install.txt (list of packages to auto-install)
- package.ps1 (package install script)
- push-develop.ps1 (pushes to the Adobe SaaS remote develop branch)
- README.md (additional information about the project)
- reset.ps1 (resets the project by deleting all volumes and containers)
- start.ps1 (starts the docker stack)
Before we get started, let’s take a look at the new docker-compose file you must have on your AEM SaaS project. If you are not doing this, then you are missing out on a lot of love!!! ❤️❤️❤️
So far, we have these core services:

- author - author instance available on port 4502, https://author.localhost
- publish - publish instance available on port 4503, https://publish.localhost
- dispatcher - dispatcher instance available on port 8081, https://dispatcher.localhost
- traefik - Traefik dashboard, https://traefik.localhost/dashboard
- createcert - creates the certificates for the Traefik instance; this provides all of the SSL certificates for the other containers.

This stack creates the base author, publish, and dispatcher services that we can use to deploy our AEM SaaS project. In addition, the traefik and createcert services provide domain routing and SSL certificates for the stack.
Here is the relevant config for these services; you can see the completed docker-compose file here aemdesign-project-services.
```yaml
version: "3.9"
services:
  ##########################################################
  # AUTHOR START
  ##########################################################
  # update query limit http://localhost:4502/system/console/jmx/org.apache.jackrabbit.oak%3Aname%3Dsettings%2Ctype%3DQueryEngineSettings
  author:
    image: ${AUTHOR_IMAGE}
    hostname: author
    restart: unless-stopped
    healthcheck:
      test: curl -u admin:admin --header Referer:localhost --silent --connect-timeout 5 --max-time 5 http://localhost:8080/system/console/bundles.json | grep -q \"state\":\"Installed\" && exit 1 || exit 0
      interval: 10s
      timeout: 10s
      retries: 20
      start_period: 1s
    ports:
      - ${AUTHOR_PORT}:8080
      - ${AUTHOR_DEBUG_PORT}:58242
    environment:
      - TZ
      - AEM_RUNMODE=-Dsling.run.modes=author,crx3,crx3tar,dev,dynamicmedia_scene7,nosamplecontent
      - AEM_JVM_OPTS=-server -Xms248m -Xmx4524m -XX:MaxDirectMemorySize=256M -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true -Dorg.apache.felix.http.host=0.0.0.0 -Xdebug -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:58242
      - AEM_PROXY_HOST=proxy
    volumes:
      - author-data:/aem/crx-quickstart/repository
    depends_on:
      - traefik
    labels:
      # note that you want this frontend to match last; otherwise it will match login.${HOST_DOMAIN}
      traefik.frontend.priority: 1
      traefik.enable: true
      traefik.http.routers.author.rule: "Host(`${AUTHOR_HOST}`)"
      traefik.http.routers.author.entrypoints: web
      traefik.http.routers.author_https.rule: "Host(`${AUTHOR_HOST}`)"
      traefik.http.routers.author_https.tls: true
      traefik.http.routers.author_https.entrypoints: websecure
      traefik.http.services.author.loadbalancer.server.port: 8080
      traefik.http.services.author.loadbalancer.passHostHeader: true
    networks:
      - mongo-network
      - author-network
      - publish-network
      - dispatcher-network
      - internal
      - default
  ##########################################################
  # AUTHOR END
  ##########################################################
  ##########################################################
  # PUBLISH START
  ##########################################################
  publish:
    image: ${PUBLISH_IMAGE}
    hostname: publish
    restart: unless-stopped
    healthcheck:
      test: curl -u admin:admin --header Referer:localhost --silent --connect-timeout 5 --max-time 5 http://localhost:8080/system/console/bundles.json | grep -q \"state\":\"Installed\" && exit 1 || exit 0
      interval: 10s
      timeout: 10s
      retries: 20
      start_period: 30s
    ports:
      - ${PUBLISH_PORT}:8080
      - ${PUBLISH_DEBUG_PORT}:58242
    environment:
      - TZ
      - AEM_RUNMODE=-Dsling.run.modes=publish,crx3,crx3tar,dev,dynamicmedia_scene7,nosamplecontent
      - AEM_JVM_OPTS=-server -Xms248m -Xmx1524m -XX:MaxDirectMemorySize=256M -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true -Dorg.apache.felix.http.host=0.0.0.0 -Xdebug -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:58242
      - AEM_PROXY_HOST=proxy
    labels:
      # note that you want this frontend to match last; otherwise it will match login.${HOST_DOMAIN}
      traefik.frontend.priority: 2
      traefik.enable: true
      traefik.http.routers.publish.rule: "Host(`${PUBLISH_HOST}`)"
      traefik.http.routers.publish.entrypoints: web
      traefik.http.routers.publish_https.rule: "Host(`${PUBLISH_HOST}`)"
      traefik.http.routers.publish_https.tls: true
      traefik.http.routers.publish_https.entrypoints: websecure
      traefik.http.services.publish.loadbalancer.server.port: 8080
      traefik.http.services.publish.loadbalancer.passHostHeader: true
    volumes:
      - publish-data:/aem/crx-quickstart/repository
    networks:
      - publish-network
      - internal
      - default
  ##########################################################
  # PUBLISH END
  ##########################################################
  ##########################################################
  # DISPATCHER START
  ##########################################################
  dispatcher:
    image: ${DISPATCHER_IMAGE}
    hostname: dispatcher
    restart: unless-stopped
    ports:
      - ${DISPATCHER_PORT}:80
    environment:
      - TZ
      - AEM_PORT=8080
      - AEM_HOST=publish
      - DISP_LOG_LEVEL=trace1 #debug
      - ENVIRONMENT_TYPE=LOCAL
      - AEM_PROXY_HOST=proxy
    labels:
      # note that you want this frontend to match last; otherwise it will match login.${HOST_DOMAIN}
      traefik.frontend.priority: 1
      traefik.enable: true
      traefik.http.routers.dispatcher.rule: "HostRegexp(`${DISPATCHER_HOST}`, `{subdomain:[a-z]+}.${DISPATCHER_HOST}`)"
      traefik.http.routers.dispatcher.entrypoints: web
      traefik.http.routers.dispatcher_https.rule: "HostRegexp(`${DISPATCHER_HOST}`, `{subdomain:[a-z]+}.${DISPATCHER_HOST}`)"
      traefik.http.routers.dispatcher_https.tls: true
      traefik.http.routers.dispatcher_https.entrypoints: websecure
      traefik.http.services.dispatcher.loadbalancer.passHostHeader: true
    volumes:
      - ./dispatcher/src/:/mnt/dev/src/
      - ./dispatcher/scripts/fix-symlinks.sh:/docker_entrypoint.d/zzz-fix-symlinks.sh
    depends_on:
      - proxy
    networks:
      - publish-network
      - dispatcher-network
      - internal
      - default
  ##########################################################
  # DISPATCHER END
  ##########################################################
  ##########################################################
  # TRAEFIK START
  ##########################################################
  traefik:
    image: ${TRAEFIK_IMAGE}
    restart: always
    hostname: traefik
    environment:
      - TZ
    security_opt:
      - no-new-privileges:true
    command:
      - "--log.level=${TRAEFIK_LOG_LEVEL}"
      - "--accesslog=${TRAEFIK_ACCESS_LOG}"
      - "--api.insecure=true" # Don't do that in production!
      - "--api.dashboard=true" # Don't do that in production!
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--global.sendAnonymousUsage=true"
      # Entrypoints for HTTP, HTTPS, and NX (TCP + UDP)
      - "--entrypoints.web.address=:${TRAEFIK_PORT_HTTP}"
      - "--entrypoints.websecure.address=:${TRAEFIK_PORT_HTTPS}"
      # - "--entrypoints.mongo.address=:${MONGO_PORT}"
      # - "--entrypoints.traefik.address=:${TRAEFIK_PORT_DASHBOARD}"
      # - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      # - "--entrypoints.web.http.redirections.entryPoint.permanent=true"
      # Manual keys
      - "--providers.file.directory=/etc/traefik/dynamic_conf"
      - "--providers.file.watch=true"
    labels:
      traefik.frontend.priority: 1
      traefik.enable: true
      traefik.http.routers.traefikdashboard.rule: "Host(`${TRAEFIK_HOST}`) && ( PathPrefix(`/api`) || PathPrefix(`/dashboard`) )"
      traefik.http.routers.traefikdashboard.entrypoints: web
      traefik.http.routers.traefikdashboard.service: api@internal
      traefik.http.routers.traefikdashboard_https.rule: "Host(`${TRAEFIK_HOST}`) && ( PathPrefix(`/api`) || PathPrefix(`/dashboard`) )"
      traefik.http.routers.traefikdashboard_https.entrypoints: websecure
      traefik.http.routers.traefikdashboard_https.tls: true
      traefik.http.routers.traefikdashboard_https.service: api@internal
      traefik.http.services.traefikdashboard.loadbalancer.server.port: 8080
    ports:
      - ${TRAEFIK_PORT_HTTP}:80
      - ${TRAEFIK_PORT_HTTPS}:443
      - ${TRAEFIK_PORT_DASHBOARD}:8080
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Persist certificates, so we can restart as often as needed
      - ./services/traefik/certs:/letsencrypt
      - ./services/traefik/config/config.yml:/etc/traefik/dynamic_conf/conf.yml:ro
    depends_on:
      createcert:
        condition: service_completed_successfully
    networks:
      - author-network
      - publish-network
      - dispatcher-network
      - internal
      - default
  createcert:
    image: ${CERTS_IMAGE}
    environment:
      - TZ
    command:
      - "${CERTS_COMMAND}"
    volumes:
      - ./services/traefik/certs:/certs
  ##########################################################
  # TRAEFIK END
  ##########################################################
networks:
  default:
  internal:
  author-network:
  publish-network:
  dispatcher-network:
  mongo-network:
volumes:
  author-data:
  authormongo-data:
  publish-data:
  dispatcher-data:
```
And as you can see by now, we have many services with links and configuration options that provide a lot of value. These services and their configuration also mean there is much more to remember and teach new team members. To make this easier, we can add a few more benefits to take your AEM SaaS project to the next level. Your head is already spinning, but don’t worry; it will spin more from excitement once you see the final outcome! 😎
First, let’s add the missing proxy service that will allow us to further make our Docker setup a Swiss army knife!
Now, this will allow you to set up those pesky reverse proxy rules that we all love! It will also let you test all those OSGi configs using placeholders like `$[env:AEM_PROXY_HOST;default=proxy.tunnel]` that you have been setting up in your AEM projects — there isn't any official way to test these locally otherwise. You will find `AEM_PROXY_HOST=proxy` added to the `environment` config for the Author and Publish services.
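As an illustration, a hypothetical OSGi config file (the file name and property names below are my own examples, not from the project) could pick up the proxy host from the container environment via that placeholder syntax:

```json
{
  "proxy.host": "$[env:AEM_PROXY_HOST;default=proxy.tunnel]",
  "proxy.port": 3128
}
```

Locally the placeholder resolves to the `proxy` service, while in Cloud environments it falls back to the declared default unless the variable is set.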
Now that we have all these glorious core services, the obvious question is how to document all these links and make them relevant to each service and useful to every person using them. The answer is simple: we add a custom dashboard generator to do this for us. Let's add the dashboard services to our docker-compose file.
```yaml
##########################################################
# DASHBOARD START
##########################################################
dashboardbuild:
  image: ${DASHBOARD_BUILD_IMAGE}
  privileged: true
  environment:
    - TZ
    - JEKYLL_ENV=production
    - DOMAIN_URL
    - GIT_REPO
    - GIT_REPO_ADOBE
    - GIT_REPO_ICON
    - GIT_REPO_TITLE
    - GIT_REPO_ADOBE_ICON
    - GIT_REPO_ADOBE_TITLE
    - TRAEFIK_URL
    - TRAEFIK_PORT_HTTP
    - TRAEFIK_PORT_HTTPS
    - TRAEFIK_PORT_DASHBOARD
    - PROXY_URL
    - MONGOUI_URL
    - AUTHOR_URL
    - AUTHOR_PORT
    - AUTHOR_DEBUG_PORT
    - PUBLISH_URL
    - PUBLISH_PORT
    - PUBLISH_DEBUG_PORT
    - DISPATCHER_URL
    - DASHBOARD_URL
    - DISPATCHER_HOST
    - PAGE_LINKS
    - SHOWCASE_LINKS
    - AUTHOR_LINKS
    - CONSOLE_LINKS
  command: bash /srv/jekyll/build.sh
  volumes:
    - ${DASHBOARD_CONTENT_PATH}:/srv/jekyll:rw
dashboard:
  image: ${DASHBOARD_IMAGE}
  restart: unless-stopped
  working_dir: /content
  hostname: dashboard
  depends_on:
    - traefik
    - dashboardbuild
  labels:
    traefik.frontend.priority: 1
    traefik.enable: true
    traefik.http.routers.dashboard.rule: "Host(`${DASHBOARD_HOST}`)"
    traefik.http.routers.dashboard.entrypoints: web
    traefik.http.routers.dashboard_https.rule: "Host(`${DASHBOARD_HOST}`)"
    traefik.http.routers.dashboard_https.tls: true
    traefik.http.routers.dashboard_https.entrypoints: websecure
    traefik.http.services.dashboard.loadbalancer.server.port: 80
    traefik.http.services.dashboard.loadbalancer.passHostHeader: true
  volumes:
    - ${DASHBOARD_CONTENT_PATH}/_site:/content
    - ${DASHBOARD_CONFIG_FILE}:/etc/nginx/nginx.conf
  environment:
    - TZ
  networks:
    - internal
##########################################################
# DASHBOARD END
##########################################################
```
The dashboardbuild service is used to build the Jekyll content, and the dashboard service is used to serve it.
This service is responsible for generating the dashboard HTML and assets so that we can serve them from a simple Nginx server. It uses a Jekyll template to generate the dashboard; the template is located in the ./services/dashboard/content folder. The service relies on environment variables to keep duplication and hardcoding in the dashboard to a minimum. Environment variables also allow you to reuse your dashboard config on other projects. If you stick to the default values, you will not need to change anything; if you do need to change something, update the .env file at the root of your project.
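Docker Compose resolves these variables from your shell environment first and falls back to the .env file, so overriding a single value is just an export away. The same fallback pattern can be mirrored in your own scripts with parameter expansion — a small sketch (`dispatcher_port` is a hypothetical helper; 8081 is the default port used earlier in this article):

```shell
#!/usr/bin/env bash
# dispatcher_port: resolve the port the same way docker compose does —
# shell environment first, hardcoded default second.
dispatcher_port() {
  echo "${DISPATCHER_PORT:-8081}"
}
```

Running `DISPATCHER_PORT=9090 docker compose up` would analogously override the .env default for the stack.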
As you can see, the dashboardbuild service heavily uses your environment variables to configure all of the services. This is done to make it easy to configure and use. Let's look at the environment variables that are used to configure the dashboard.
The following configurations should be updated for your project. These configurations are used on the dashboard page to display the project information.
You can get all this information from the Adobe Developer or Adobe Experience consoles.
This configuration is used to construct project URLs in the Console Config section.
```shell
ADOBE_PROGRAM_ID="99999"
ADOBE_PROGRAM_REGION_ID="99999"
ADOBE_PROGRAM_ENVIRONMENT_PROD_ID="999991"
ADOBE_PROGRAM_ENVIRONMENT_STAGE_ID="999992"
ADOBE_PROGRAM_ENVIRONMENT_DEV_ID="999993"
ADOBE_PROGRAM_NAME="aemdesign"
ADOBE_PROGRAM_LOCATION="AEMDESIGN-p${ADOBE_PROGRAM_ID}-${ADOBE_PROGRAM_REGION_ID}"
ADOBE_PROGRAM_TITLE="AEM.Design"
ADOBE_PROGRAM_DESCRIPTION="AEM.Design"
ADOBE_PROGRAM_URL="https://aem.design"
```
These are the console links for the project; they are displayed on the dashboard page and will enable you to quickly navigate to the console pages for the project.
```shell
# Console Config
ADOBE_CONSOLE_EXPERIENCE_URL="https://experience.adobe.com/#/@${ADOBE_PROGRAM_NAME}/cloud-manager/environments.html/program/${ADOBE_PROGRAM_ID}"
ADOBE_CONSOLE_EXPERIENCE_URL_ICON="fab fa-adobe"
ADOBE_CONSOLE_EXPERIENCE_URL_TITLE="Cloud Manager"
ADOBE_CONSOLE_DEVELOPER_URL="https://developer.adobe.com/console/home"
ADOBE_CONSOLE_DEVELOPER_URL_ICON="fab fa-adobe"
ADOBE_CONSOLE_DEVELOPER_URL_TITLE="Developer Console"
ADOBE_CONSOLE_ADMIN_URL="https://adminconsole.adobe.com/"
ADOBE_CONSOLE_ADMIN_URL_ICON="fab fa-adobe"
ADOBE_CONSOLE_ADMIN_URL_TITLE="Admin Console"
# format: <URL>|<TITLE>|<ICON>
CONSOLE_LINKS="${ADOBE_CONSOLE_EXPERIENCE_URL}|${ADOBE_CONSOLE_EXPERIENCE_URL_TITLE}|${ADOBE_CONSOLE_EXPERIENCE_URL_ICON},${ADOBE_CONSOLE_DEVELOPER_URL}|${ADOBE_CONSOLE_DEVELOPER_URL_TITLE}|${ADOBE_CONSOLE_DEVELOPER_URL_ICON},${ADOBE_CONSOLE_ADMIN_URL}|${ADOBE_CONSOLE_ADMIN_URL_TITLE}|${ADOBE_CONSOLE_ADMIN_URL_ICON}"
```
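Since the link lists share the comma-separated `<URL>|<TITLE>|<ICON>` format, consuming them from a script takes only a couple of parameter expansions per field. A rough sketch (`parse_links` is my own helper name, not something shipped by the project):

```shell
#!/usr/bin/env bash
# parse_links: split a comma-separated list of "<URL>|<TITLE>|<ICON>" entries
# and print one "TITLE -> URL (ICON)" line per entry.
parse_links() {
  local IFS=','   # split the argument on commas only
  local entry
  for entry in $1; do
    local url="${entry%%|*}"     # everything before the first |
    local rest="${entry#*|}"     # everything after the first |
    local title="${rest%%|*}"
    local icon="${rest#*|}"
    printf '%s -> %s (%s)\n' "$title" "$url" "$icon"
  done
}
```

This mirrors how the Jekyll template splits the variable into individual dashboard links.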
When developing or testing the project, you typically want to navigate to the home page or showcase page, so this is where you configure your commonly used quick links. Update `PAGE_LINKS` with the links you use on a regular basis. Typically, you would add a link to the homepage of every site you will be working on. Furthermore, if you have a sister showcase site for each of your live sites, updating `SHOWCASE_LINKS` will add an additional row of links matching your `PAGE_LINKS`.
These links are displayed on the dashboard page.
```shell
# format: <URL>|<TITLE>|<ICON>|<DISPATCHER SUBDOMAIN>
PAGE_LINKS="/content/aemdesign/home.html|AEM.Design - Home|fa fa-globe|aemdesign"
SHOWCASE_LINKS="/content/aemdesign-showcase.html/|AEM.Design - Showcase|fa-globe|aemdesign"
```
These are the environment links for the project, and they are displayed on the dashboard page, which will enable you to quickly navigate to the environment pages for the project. By default, this is configured to basic PROD, STAGE, and DEV environments. Update this to match all of your provisioned environments.
```shell
ADOBE_PROGRAM_ENVIRONMENT_PROD_URL="https://author-p${ADOBE_PROGRAM_ID}-e${ADOBE_PROGRAM_ENVIRONMENT_PROD_ID}.adobeaemcloud.com/"
ADOBE_PROGRAM_ENVIRONMENT_PROD_TITLE="Prod"
ADOBE_PROGRAM_ENVIRONMENT_PROD_ICON="fa fa-globe"
ADOBE_PROGRAM_ENVIRONMENT_STAGE_URL="https://author-p${ADOBE_PROGRAM_ID}-e${ADOBE_PROGRAM_ENVIRONMENT_STAGE_ID}.adobeaemcloud.com/"
ADOBE_PROGRAM_ENVIRONMENT_STAGE_TITLE="Stage"
ADOBE_PROGRAM_ENVIRONMENT_STAGE_ICON="fa fa-globe"
ADOBE_PROGRAM_ENVIRONMENT_DEV_URL="https://author-p${ADOBE_PROGRAM_ID}-e${ADOBE_PROGRAM_ENVIRONMENT_DEV_ID}.adobeaemcloud.com/"
ADOBE_PROGRAM_ENVIRONMENT_DEV_TITLE="Dev"
ADOBE_PROGRAM_ENVIRONMENT_DEV_ICON="fa fa-globe"
# format: <URL>|<TITLE>|<ICON>
AUTHOR_LINKS="${ADOBE_PROGRAM_ENVIRONMENT_PROD_URL}|${ADOBE_PROGRAM_ENVIRONMENT_PROD_TITLE}|${ADOBE_PROGRAM_ENVIRONMENT_PROD_ICON},${ADOBE_PROGRAM_ENVIRONMENT_STAGE_URL}|${ADOBE_PROGRAM_ENVIRONMENT_STAGE_TITLE}|${ADOBE_PROGRAM_ENVIRONMENT_STAGE_ICON},${ADOBE_PROGRAM_ENVIRONMENT_DEV_URL}|${ADOBE_PROGRAM_ENVIRONMENT_DEV_TITLE}|${ADOBE_PROGRAM_ENVIRONMENT_DEV_ICON}"
```
This configuration is used for Git links.
```shell
# Git Config
#GIT_REPO_AUTH="<username>:<password>@" # set this in your terminal
GIT_REPO_AUTH=""
GIT_REPO="https://${GIT_REPO_AUTH}github.com/aem-design/aemdesign-project-services.git"
GIT_REPO_ICON="fa-github" #fa-github,fa-bitbucket
GIT_REPO_TITLE="Github"
#GIT_REPO_ADOBE_AUTH="<username>:<password>@" # set this in your terminal
GIT_REPO_ADOBE_AUTH=""
GIT_REPO_ADOBE="https://${GIT_REPO_ADOBE_AUTH}git.cloudmanager.adobe.com/${ADOBE_PROGRAM_NAME}/${ADOBE_PROGRAM_LOCATION}/"
GIT_REPO_ADOBE_ICON="fa-adobe"
GIT_REPO_ADOBE_TITLE="Adobe Git"
```
Now that we can generate our dashboard page, we can serve it using a simple Nginx server. This service uses the ./services/dashboard/config/nginx.conf file to configure the Nginx server and loads the content from the ./services/dashboard/content/_site folder. If you do not change the default values, this service will be available at https://dashboard.localhost.
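The actual nginx.conf ships with the project, but serving a generated Jekyll site needs very little; a minimal static-serving configuration along these lines would do (this sketch is my own, not the project's file):

```nginx
# minimal sketch: serve the generated Jekyll site from /content on port 80
events {}
http {
  include /etc/nginx/mime.types;
  server {
    listen 80;
    root  /content;   # matches the ${DASHBOARD_CONTENT_PATH}/_site volume mount
    index index.html;
  }
}
```

The root directory matches where the compose file mounts the generated `_site` folder.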
To give you an idea of what this will look like, here is a screenshot of the dashboard page.
As you can see, this is a simple page that lets you navigate to all the services we have set up. It also lets you add additional links as needed and gives you a single place to manage and share them with your team, or to remind yourself in the not-so-distant future. To update this dashboard page, edit the ./services/dashboard/content/index.md file. This is a markdown file used by Jekyll to generate the resulting HTML page that Nginx serves at https://dashboard.localhost.
This guide does not focus on how the Jekyll site generator works; you can tap into its documentation site for further help. This site is also built using Jekyll, so you can see what is possible with Jekyll in its repo https://github.com/aem-design/aem.design.
You will also find that this dashboard pattern is implemented in this repo. Here is a screenshot of the dashboard page for this repo.
So you can copy and apply this pattern to your other projects as well.
I hope you will find this guide useful and will be able to apply it to your projects. This method persists all knowledge within the project repo with an elegant presentation, an alternative to the wild west of bookmarks and hidden Confluence pages.
If you want to test out this setup for yourself, you can clone the aemdesign-project-services repository and run the ./start.ps1 command to see it in action. This will start all the services and open the dashboard page at https://dashboard.localhost.
I hope you enjoyed this guide. If you have any questions or comments, feel free to contact me. I will be happy to help.
Let me know what you think and don’t forget to tell your friends.
It is always a pleasure to be part of something new and upcoming. So when I was invited to an exclusive preview of the soon-to-be-released open-source WebSight CMS, I could not resist.
I’ve been keeping an eye on new entrants to the Sling-based CMS space for over a decade, and finally WebSight CMS has entered it, bringing new capabilities to the table. Big shout out to the WebSight team and Michał Cukierman for working on this for several years and bringing an open-source offering to the community. You can check out the GitHub repo for the community version at https://github.com/websight-io/starter.
These days, the standard capabilities of CMS platforms have all but become feature complete. But there is still room to attain cleanliness of experience and simplicity for the average author, and this is where WebSight is making its play. Instead of trying to reinvent how a great enterprise CMS platform should be built, it leverages proven patterns, which allows the team at WebSight to focus on the author experience and the enterprise features that make a difference.
WebSight CMS has hit the ground running by leveraging the same stack as Adobe AEM, Apache Sling CMS, and Peregrine CMS, to name a few. Leveraging the technology is one thing, but WebSight CMS's team also has the contenders covered in terms of author user experience.
WebSight CMS covers the following pillars Sling CMS users expect:

- Spaces - for managing sites
- Pages - author site hierarchy and edit pages
- Assets - manage assets

Additionally, there are several administrative functions that allow management of Content, Users and Groups, and other admin tools for performing administrative tasks to meet many needs:

- Packages - manage packages (Package Manager for AEM users)
- Resource Browser (CRX/DE for AEM users)
- User Management
- Groovy Console - a great OOTB addition allowing you to do just about anything in the backend.
- Swagger Browser - amazing to see this OOTB, as it greatly streamlines any UI extension development, where on other platforms you need to guess and reverse engineer this.
Under the covers, WebSight CMS runs on the Apache Sling stack but uses Mongo OOTB, with NGINX as a welcome replacement for the Apache/Dispatcher combo. It's fantastic to see a native Docker setup; Docker has been a must for any project for the last ten years, and it's a great step towards adoption as it removes all barriers to getting started.
As a deep AEM user, I deeply respect the inspiration from Adobe AEM experience with the number of authoring features that are a must for my authoring needs.
In addition to inspiration, the team behind WebSight has added subtle improvements to the authoring experience. A nice touch is adding grouping to the list of components, which breaks up what is usually a monotone list. Separating the component list into Layout and Components (I would have called this Content) is a great separation of concerns and provides additional grouping for authoring logic.
What’s missing in the open-source preview version as of my review? I am sure it will make it here eventually; this is more of a wishlist that would further round out the feature set:
These and everything else can be built on this already great package.
WebSight CMS is a great new entrant with many opportunities; it’s great to see how Sling CMS can be leveraged to great effect! So feel free to drop a star on their repos at WebSight CMS and shout it out on your favourite socials. Until next time!
Let’s now look at some base config you can start with when setting up your dev box! Let us focus only on the base Windows setup, which can be used as a foundation for any tooling specialisation.
If you are a developer, a frontend dev, backend dev, full stack, or just want to know what you should have on your Windows dev box as a starting point, then this article is for you.
For the best experience with Windows, before you begin experimenting and installing all sorts of tools, you are going to need some basics.
First off, you will need some proper tools to use on Windows for development. These should be at the top of your list.
If you cannot use Docker Desktop, you can just set up Docker in WSL and run it using your impressive Windows Terminal!
Once you have these tools set up, you should be able to contribute to most code projects.
Before you run off and get busy with code, you need to verify and make some Windows config updates; these are not so obvious, but you will encounter them at some point.
Then you are going to have to enable Windows long file names.
reg add "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 0x1 /f
reg add "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Policies" /v LongPathsEnabled /t REG_DWORD /d 0x1 /f
reg query "HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem"
reg query "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Policies"
Now you are going to have to tell Git to use long file names, just in case it won’t do it by default.
git config --system core.longpaths true
Now you can happily clone Linux/Unix repos that have very long paths!
To get the best out of your shell, you need to set up some environment variables that will give you access to your core tools:

- Open sysdm.cpl as Administrator and, on the Advanced tab, click Environment Variables
- Add JAVA_HOME pointing to C:\Program Files\Java\jdk1.8.0_301
- Add M2_HOME pointing to C:\software\apache-maven-3.8.4 (where you installed Maven)
- Under System Variables, select Path, then click Edit
- Click New and add the following:
  - %JAVA_HOME%\bin - this will allow you to run Java
  - %M2_HOME%\bin - this will allow you to run Maven
  - C:\Program Files\Git\usr\bin - this will allow you to run various Linux commands: sed, awk, grep etc.
- Ok to all the remaining dialogue boxes to save all the changes
- Run java -version - you should see a Java JDK version if all went well

After this, you should have the following paths in your command line path:
%M2_HOME%\bin
%JAVA_HOME%\bin
C:\Program Files\Git\usr\bin
Create .wslconfig in ~/ with the following content:
[wsl2]
memory=6GB
This will ensure that WSL will not use up all of your resources. See more on Advanced settings configuration in WSL.
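The steps above can also be scripted; this sketch writes the file into the current directory so you can inspect it before copying it into your Windows user profile folder (the real location is ~/ as noted above):

```shell
# Sketch: generate the .wslconfig shown above into the current directory;
# the real file belongs in your Windows user profile folder (~/).
cat > .wslconfig <<'EOF'
[wsl2]
memory=6GB
EOF

# verify the memory cap made it into the file
grep 'memory=6GB' .wslconfig
```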
Once you reboot, your WSL should behave, and you will be able to use it with nice tools! Let me know what else you think should be on the list!
I hope you enjoyed this guide. If you have any questions or comments, feel free to contact me. I will be happy to help.
Let me know what you think and don’t forget to tell your friends.
Additionally, when using Docker Compose, you will come to understand whether your deployment architecture is complex and which parts are critical; this will give you feedback to clean things up.
Docker Compose requires you to have one docker-compose.yml file in your project root. This file is a YAML file that describes the deployment of your stack. In simple scenarios, one file will be enough; for most scenarios, having a structure set up for growth is going to be beneficial long term. When a developer clones your repo, all they need to do is run docker compose up, and the stack will be deployed. Once it’s running, they open their browser, go to http://localhost, and land on the stack console:
This simple console can be updated in any way required to convey context and provide any relevant information and links to developers working on the project. This is a straightforward way to surface a lot of documentation in a usable manner.
Let’s take a deeper look at the details; here is an example of a simple deployment, with the source here: docker-compose.yml. This docker compose file has the following services:
These services can be activated all at once or individually; you can run docker compose up traefik nginx to get only the developer console on http://localhost. Some of the services have dependencies on others, so you can see the order in which they are activated. Some services are optional and can be activated when required using profile activation: docker compose --profile dodeploy up author-deploy-core.
The example I’ve provided uses the older 2.4 version of docker compose that supports the extends keyword:
extends:
file: ./docker/aem/docker-compose.yml
service: author-deploy-support
This extends keyword allows you to pull together predefined docker compose files from a better-organised structure.
Having a well-organised docker folder with a subfolder for all services will ensure that you can have a single file that can be used to deploy all of your services. This also allows you to have all of the relevant per service configurations isolated and not mixed in the same file. Here is an example structure of a docker folder, source is here:
docker
├── aem
│ └── docker-compose.yml
├── common
│ ├── config-tz.yml
├── nginx
│ └── docker-compose.yml
├── selenium
│ └── docker-compose.yml
├── testing
│ ├── docker-compose.yml
└── traefik
└── docker-compose.yml
If you check out the GitHub link you will find a number of relevant config files in each service folder. Furthermore, as you add new services, it will be easy to follow this pattern to keep things neat and tidy.
In later versions of docker compose, see compose-file-v3, you can’t use the extends keyword, and you need to pass all of the required service docker-compose.yml files when running the docker compose up command. This can simply be abstracted into a start script; here is an example of how to do it: start.ps1. This is a PowerShell script that will run the docker compose up command for all services activated from the start-services.conf file. In addition, that script leverages a .env file to keep all of your environment variables in one place for easy maintenance; see the example .env file. You can read more about environment files in the official docs.
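The idea behind such a start script can be sketched in plain shell. The service list and paths below are illustrative assumptions based on the folder structure shown earlier, not the actual contents of start-services.conf:

```shell
# Sketch: assemble a multi-file "docker compose up" command from a list of
# enabled services; service names and paths are illustrative assumptions.
SERVICES="traefik nginx"

CMD="docker compose"
for service in $SERVICES; do
  CMD="$CMD -f docker/$service/docker-compose.yml"
done
CMD="$CMD up -d"

# print the assembled command instead of running it, so the sketch is safe to try
echo "$CMD"
```

The same loop is what the real start script does in PowerShell: collect one -f flag per activated service folder, then hand the whole list to docker compose in a single invocation.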
I hope you will find that using docker compose brings a smile to your face {insert home alone 2 smile}. As always, if you have any questions or comments, feel free to contact me. I will be happy to help.
Let me know what you think and don’t forget to tell your friends.
What does this mean for AEM? Well, it means all of the containers will essentially need to be updated to use something else. You can’t use CentOS, so the next best thing would be … Debian. Yes, Ubuntu is a good contender, but you won’t be able to run AEM Forms on it; tl;dr, Ubuntu does not have 32-bit support.
Old containers will keep working, but slowly all new updates will need to be rolled onto the new Debian images that are available right now. Go ahead and check them out to see what you think. You can also check out the AEM.Design Docker Hub for the latest images.
Here, for example, is the latest AEM 6.5 with SP 11:
docker run --name author6511 -e "TZ=Australia/Sydney" -e "AEM_RUNMODE=-Dsling.run.modes=author,crx3,crx3tar,forms,localdev" -e "AEM_JVM_OPTS=-server -Xms248m -Xmx1524m -XX:MaxDirectMemorySize=256M -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true -Dorg.apache.felix.http.host=0.0.0.0 -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=58242,suspend=n -XX:+UseParallelGC --add-opens=java.desktop/com.sun.imageio.plugins.jpeg=ALL-UNNAMED --add-opens=java.base/sun.net.www.protocol.jrt=ALL-UNNAMED --add-opens=java.naming/javax.naming.spi=ALL-UNNAMED --add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/jdk.internal.loader=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED -Dnashorn.args=--no-deprecation-warning" -p4502:8080 -p30303:58242 -d aemdesign/aem:6.5.11.0-jdk11
And as an extra bonus, if you prefer to use this on your M1 Mac, here is the same command again with the ARM suffix in the tag:
docker run --name author6511 -e "TZ=Australia/Sydney" -e "AEM_RUNMODE=-Dsling.run.modes=author,crx3,crx3tar,forms,localdev" -e "AEM_JVM_OPTS=-server -Xms248m -Xmx1524m -XX:MaxDirectMemorySize=256M -XX:+CMSClassUnloadingEnabled -Djava.awt.headless=true -Dorg.apache.felix.http.host=0.0.0.0 -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=58242,suspend=n -XX:+UseParallelGC --add-opens=java.desktop/com.sun.imageio.plugins.jpeg=ALL-UNNAMED --add-opens=java.base/sun.net.www.protocol.jrt=ALL-UNNAMED --add-opens=java.naming/javax.naming.spi=ALL-UNNAMED --add-opens=java.xml/com.sun.org.apache.xerces.internal.dom=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/jdk.internal.loader=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED -Dnashorn.args=--no-deprecation-warning" -p4502:8080 -p30303:58242 -d aemdesign/aem:6.5.11.0-jdk11-arm
I hope you will find the new ARM images useful. If you have any questions or comments, feel free to contact me. I will be happy to help.
Let me know what you think and don’t forget to tell your friends.
But as you all know, Docker Desktop WAS a great tool for running docker on your desktop, up until it became a thing of the past. It drove itself off the cliff with a paid subscription. Most corporate companies will think long and hard before purchasing Docker Desktop licences. So if you can’t use it at work, why would you use it at home? After all, you are what you practice, and you are what you use. Consistency of tools is critical to developers.
There are many alternatives to the Docker Desktop stack, but let’s not throw docker simplicity out the window yet. Obviously, Kube and Helm are the destination, but let’s take small steps. For DevOps, using git with docker-compose gives you all the power of git and docker. Yes, the team behind Docker Desktop have added a lot of front-end features to it, and maybe there is a use case for them, but in a pipeline-driven world, you can’t use them. You don’t use Docker Desktop in production, so keeping the rest of the stack the same is a good idea.
So this brings this journey to a crossroads. Do you build a VM and run the docker engine and docker-compose in there, or do you run this semi-natively? If you are on Linux/Unix you are alright; you can google your way out. On Windows, however, the best experience is attained through PowerShell Core 7, WSL2 and Windows Terminal. Go ahead, try these on; you will never look back! Also, while you are at it, stop using CYGWIN to do this; you are making your life harder than it has to be. :P
Now that you have tools from the future installed, let’s proceed to the next steps.
This is an Ubuntu guide; as with Docker Desktop, the CentOS ecosystem is dead, so using Ubuntu is the best option we have (long-story post to follow).
Open up PowerShell Core and run the following commands.
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl.exe --set-default-version 2
wsl --install --distribution Ubuntu
wsl --install --distribution Debian
Now that you have an Ubuntu image in your WSL, you can restart Windows Terminal, and it will appear as a new option.
Because we did not use the MS appx package, the new Ubuntu has only a root user; the best approach is to add a new user to your liking. Run the new Ubuntu terminal in Windows Terminal, and in it run the following commands. (Change the user name and password as you like when prompted.)
useradd -m -d /home/maxbarrass -s /bin/bash maxbarrass
passwd maxbarrass
usermod -aG sudo maxbarrass
echo "maxbarrass ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers
Now that you have a new user in your Ubuntu, you can update your Windows Terminal profile to use the new user.
This should be in the Command line for your ubuntu profile:
wsl.exe -d Ubuntu -u maxbarrass
You can run this in your Ubuntu terminal:
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/aem-design/aem.design/master/assets/scripts/install-docker-wsl.sh)"
Or create a new script nano install-docker.sh with the following content and run it: sudo ./install-docker.sh ${USER}. This will install docker and docker-compose, as well as add a docker service start to your .profile. This way, when you open your Ubuntu, it will ensure that docker is running.
#!/bin/bash
#allow your account to sudo without password
echo "$USER ALL=(ALL) NOPASSWD:ALL" | sudo tee -a /etc/sudoers
# update the package manager and install some prerequisites (all of these aren't technically required)
sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common libssl-dev libffi-dev git wget nano
# create a group named docker and add yourself to it
# so that we don't have to type sudo docker every time
# note you will need to logout and login before this takes effect (which we do later)
sudo groupadd docker
sudo usermod -aG docker $USER
# add Docker key and repo
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" -y
# add kubectl key and repo
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
# update the package manager with the new repos
sudo apt-get update
# upgrade the distro
sudo apt-get upgrade -y
sudo apt-get autoremove -y
# install docker
sudo apt-get install -y docker-ce containerd.io
# install kubectl
sudo apt-get install -y kubectl
# install latest version of docker compose
sudo curl -sSL https://github.com/docker/compose/releases/download/v`curl -s https://github.com/docker/compose/tags | grep "compose/releases/tag" | sed -nr 's|.*([0-9]+\.[0-9]+\.[0-9]+).*|\1|p' | head -n 1`/docker-compose-`uname -s | tr '[:upper:]' '[:lower:]'`-`uname -m` -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
# ensure docker does not use iptables
sudo touch /etc/docker/daemon.json
sudo tee -a /etc/docker/daemon.json <<EOF
{
"iptables": false
}
EOF
# auto start docker on boot
echo "Starting docker service"
echo "sudo service docker start" >> ~/.profile
# mount host drives to root /c/ etc.
sudo touch /etc/wsl.conf
sudo tee -a /etc/wsl.conf <<EOF
[automount]
root = /
options = "metadata"
EOF
Reboot, open Windows Terminal and open a bash prompt. You should be prompted for a password to start docker. After that you can run docker ps to see if docker is running.
I hope you enjoyed this guide. If you have any questions or comments, feel free to contact me. I will be happy to help.
Let me know what you think and don’t forget to tell your friends.
By far the easiest method of updating your AEM content programmatically is to use an ACS On-Deploy Script.
To do this you will need the Java files below, OnDeployScriptProviderImpl and UpdateNodeAttibutes; read more about this in the docs. Here is the starting content for your files…
Location: design\aem\ondeploy\OnDeployScriptProviderImpl.java
package design.aem.ondeploy;
import com.adobe.acs.commons.ondeploy.OnDeployScriptProvider;
import com.adobe.acs.commons.ondeploy.scripts.OnDeployScript;
import design.aem.ondeploy.scripts.*;
import java.util.Arrays;
import java.util.List;
import org.osgi.service.component.annotations.Component;
// Use OSGi DS annotations only; mixing the Felix SCR @Service/@Properties
// annotations with the OSGi @Component annotation would not register the service.
@Component(
    immediate = true,
    service = OnDeployScriptProvider.class,
    property = {
        "service.description=Developer service that identifies code scripts to execute upon deployment"
    }
)
public class OnDeployScriptProviderImpl implements OnDeployScriptProvider {
@Override
public List<OnDeployScript> getScripts() {
return Arrays.asList(
new UpdateNodeAttibutes()
);
}
}
Location: design\aem\ondeploy\scripts\UpdateNodeAttibutes.java
package design.aem.ondeploy.scripts;
import com.adobe.acs.commons.ondeploy.scripts.OnDeployScript;
import com.adobe.acs.commons.ondeploy.scripts.OnDeployScriptBase;
import com.day.cq.search.PredicateGroup;
import com.day.cq.search.Query;
import com.day.cq.search.QueryBuilder;
import com.day.cq.search.result.Hit;
import com.day.cq.search.result.SearchResult;
import com.day.cq.wcm.api.Page;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import org.apache.commons.lang3.StringUtils;
import org.apache.sling.api.resource.ModifiableValueMap;
import org.apache.sling.api.resource.Resource;
import org.apache.sling.api.resource.ResourceResolver;
public class UpdateNodeAttibutes extends OnDeployScriptBase {
private static final String CONTENT_ROOT_PATH = "/content/";
private static final Map<String, String> RENAME_ATTRIBUTES = new HashMap<>();
private static final Map<String, String> ADD_ATTRIBUTES = new HashMap<>();
@Override
protected void execute() throws Exception {
QueryBuilder queryBuilder = this.getResourceResolver().adaptTo(QueryBuilder.class);
// predicates for properties we are looking for.
// more info on how to make these https://github.com/paulrohrbeck/aem-links/blob/master/querybuilder_cheatsheet.md
HashMap<String, String> param = new HashMap<>();
param.put("path", CONTENT_ROOT_PATH);
param.put("p.limit", "-1");
long i = 1;
for(Map.Entry<String, String> entry : RENAME_ATTRIBUTES.entrySet()) {
String key = entry.getKey();
param.put("group." + i + "_property", "@" + key);
param.put("group." + i + "_property.value", "true");
param.put("group." + i + "_property.operation", "exists");
i++;
}
param.put("group.p.or", "true");
// this will return a list of all pages that have any of properties we need
Query query = queryBuilder.createQuery(
PredicateGroup.create(param),
this.getResourceResolver().adaptTo(Session.class)
);
SearchResult result = query.getResult();
boolean migrationError = false;
// walk the query result
for (final Hit hit : result.getHits()) {
Resource resultResource = hit.getResource();
if (resultResource != null) {
ModifiableValueMap resourceProps = resultResource.adaptTo(ModifiableValueMap.class);
try {
// walk through all attributes that need to be renamed
for(Map.Entry<String, String> entry : RENAME_ATTRIBUTES.entrySet()) {
String key = entry.getKey();
String newKey = entry.getValue();
// if the node has the old attribute, move its value to the new key
if (resourceProps.containsKey(key)) {
// add value with new key
resourceProps.put(newKey,resourceProps.get(key));
// remove old key and value
resourceProps.remove(key);
}
}
for(Map.Entry<String, String> entry : ADD_ATTRIBUTES.entrySet()) {
String key = entry.getKey();
String value = entry.getValue();
// if the node already has the attribute, update its value
if (resourceProps.containsKey(key)) {
// add new key and value
resourceProps.put(key,value);
}
}
} catch (Exception e) {
migrationError = true;
e.printStackTrace();
throw new RuntimeException("Could not complete migration.");
}
}
}
if (!migrationError) {
this.getSession().save();
}
}
}
Update the RENAME_ATTRIBUTES and ADD_ATTRIBUTES maps for the desired outcome; good luck and have fun!
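For illustration, here is a hedged sketch of what populated maps might look like; the property names ("legacyTitle", "migrated") are made-up examples, not values from this article:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: example values for the two maps driving the on-deploy script.
// "legacyTitle" and "migrated" are hypothetical property names.
public class AttributeMaps {
    static final Map<String, String> RENAME_ATTRIBUTES = new HashMap<>();
    static final Map<String, String> ADD_ATTRIBUTES = new HashMap<>();

    static {
        // move the value of a legacy property to its new name on matching nodes
        RENAME_ATTRIBUTES.put("legacyTitle", "jcr:title");
        // stamp content that has been touched by the migration
        ADD_ATTRIBUTES.put("migrated", "true");
    }
}
```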
If you would like to contribute or fork the code, you can get it on GitHub https://github.com/aem-design and through Maven central.
Don’t forget to tell your friends.
Before we start, here is the context for each term:
- SPA - single page application, an alternative to a multi-page website.
- SPA Editor - AEM native editor for SPAs.
- Headless - a pattern where you leverage an API or GraphQL to get data from the server.
- Widget - a component of a web page with a client-side experience; it has basic HTML, and JavaScript turns it into the interactive experience when loaded.
Now we can extrapolate these in relation to AEM. Here is a summary of the patterns for implementing single-page and multi-page experiences in AEM. In a lot of AEM implementations, you will find that all of these methods have been utilised over time.
- SPA - a standalone application hosted externally to AEM, managed by developers, with potential config by authors through content; can be headless.
- SPA in a Page - provides a method of hosting a SPA application in a page, giving the ability to place SPAs in different parts of the site; developer-focused, the SPA has a dedicated component with possible authoring inputs; can be headless.
- SPA Editor - native ability to create SPAs in AEM, allowing full authoring of the application; native components; can be headless.
- Page - a native way to create multi-page experiences, allowing full authoring of pages, full content reuse and the ability to use any components needed; a primary way of creating content for the web.
- Widgets in Page - small targeted components that are added to pages to create experiences; provides a way to create rich experiences that could be hard to author; can be headless.
Here is this information in a table.
| Pattern | Headless | Hosting: External | Hosting: Internal | Experience: Single Page | Experience: Multi-Page | Focus: Authoring | Focus: Developer | Content: API | Content: Page | Content: Tags | Content: Assets |
|---|---|---|---|---|---|---|---|---|---|---|---|
| SPA | x | x |  | x |  |  | x | x |  |  |  |
| SPA in Page | x |  | x | x |  |  | x | x |  |  |  |
| SPA Editor | x |  | x | x |  | x |  | x | x | x | x |
| Page |  |  | x |  | x | x |  |  | x | x | x |
| Widget in Page | x |  | x | x | x | x | x | x | x | x | x |
Adding a Design Language System into the mix adds another level of complexity, as SPA and native Web will have different design languages, primarily because native AEM components can have Styles applied to them by authors, while a SPA would have a more focused and controlled set of settings.
Focusing on the AEM authoring experience will always be beneficial, as it will allow more people to help grow the experience. Having more eyes and hands helping to build the experience is always a good thing.
If you must have a developer-centric experience and use AEM as a content repository, you can do it, but you will be missing out on a lot of benefits that you would then need to reinvent from scratch.
Using the AEM SPA Editor will open your experience building to authors, so they can help in any way they can. If you just have developers doing the authoring, then you will also miss out on valuable input, and chances are you will revert to the vanilla SPA pattern.
Going down the Widgets in a Page route will give you the biggest impact, as you can leverage all of the content authoring patterns and content services available in AEM. This will let you integrate widgets into existing pages and target them easily, and importantly, all of this will be done by authors.
Obviously, all other patterns can be developed to the same level of maturity given appropriate time and effort.
Here is the background information on building blocks that are used in SPA patterns.
There are many ways to make great experiences for the web. In the end, HTML and CSS are what all experiences are made of, and it’s the details of how developers choose to implement those experiences that vary. Traditional methods of web dev are either plain HTML pages or Single Page Applications (SPAs).
The plain HTML approach means that you develop multi-page experiences where a user navigates a network of related pages. It’s a long-term play, and you leverage a lot of technical and taxonomy tools to help you play this out over a long period of time.
Single Page Applications, or SPAs, are another approach that focuses all of the experience into one page. This means that the user does not navigate pages as such; they are confined to one page, and they navigate the experience as it’s laid out by the SPA. The SPA uses traditional APIs or GraphQL to gather the content it needs.
AEM’s technical strength is in the flexibility of content, content architecture and the ability to render content in place in different ways. You can store content in AEM any way you want, structure it to make logical sense and retrieve it with either the native API, GraphQL or a custom API you need.
In AEM, you can store some content at a location /content/page, then request its HTML representation /content/page.html, its XML representation /content/page.xml, its JSON representation /content/page.tidy.5.json, its image representation /content/page.thumbnail.png, and on and on.
AEM has a method for adding these renderers (html, json, xml, thumbnail.png) that enables you to read content from anywhere, and however you want it, essentially allowing the whole content repository to act as an API source; you can read different bits of different content in many ways depending on your need.
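To make the URL convention concrete, here is a hedged sketch of how a Sling-style request URL decomposes into resource path, selectors and extension. Sling does this parsing itself server-side; this snippet only illustrates the convention:

```shell
# Illustrative only: split a Sling-style request URL into its parts.
url="/content/page.tidy.5.json"

resource="${url%%.*}"      # before the first dot  -> resource path
rest="${url#*.}"           # selectors + extension
extension="${rest##*.}"    # after the last dot    -> rendering extension
selectors="${rest%.*}"     # between the dots      -> selectors
if [ "$selectors" = "$extension" ]; then
  selectors=""             # no selectors present (e.g. /content/page.html)
fi

echo "resource=$resource selectors=$selectors extension=$extension"
```

For /content/page.tidy.5.json this yields resource /content/page, selectors tidy.5 and extension json, which is exactly how the renderer is chosen.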
AEM has a number of content experiences Content Fragments and Models, Experience Fragments, Tags and Sling API that allow you to get content from AEM. Additionally, ACS has several services that further complement OOTB functionality.
When it comes to the traditional API approach, the aim is to funnel all of the calls into one area, one “service” that handles requests for that content. You typically should have an API gateway to ensure you do not flood your backend service, and you would have a number of APIs returning either atomic data or aggregated sets of data.
Aggregation of APIs is a pattern for gathering data from different APIs and presenting it in one package to the consumer. This pattern can be implemented on both the client and server side. GraphQL essentially provides server-side and client-side aggregation in one; you can get atomic data and aggregate it as well with one API.
The GraphQL API approach allows you to get the same data as a traditional API but potentially at the client side. This obviously has a lot of perceived flexibility, as the structure of the API is moved up the stack and managed by the UI layer; the same methods should be used to protect the backend.
Headless is a method of using AEM as a source of data, and the primary way of achieving this is by using API and GraphQL for getting data out of AEM. This pattern can be used in any SPA and Widget approach but does make AEM more developer-focused.
Widgets are a way of creating AEM authoring components that have rich client-side presentations. This pattern allows full authoring experiences and all of the API patterns to be used.
Please check out the Docker Hub aemdesign/aem for the latest AEM SDK images.
If you would like to contribute or fork the code, you can get it on GitHub https://github.com/aem-design and through Maven central.
Don’t forget to tell your friends.
After the first hit to a page in AEM, the HTML response received from the publish instance will get cached at the Akamai level. Subsequent requests to the same page will be served from the cached content in Akamai rather than hitting the dispatcher/publisher.
Have you ever struggled to get the latest HTML from the publisher instead of the Akamai cache?
Well we did!!!
We have worked on an AEM replication agent for flushing the Akamai cache whenever a page gets published.
So, after an author changes page content and publishes it, our Akamai cache flush agent configured on the publish environment will pick up that page and request Akamai to clear its cache, so that users of that page will get the latest content instead of old cached content from Akamai.
It is all automated; we don’t need to clear the Akamai cache when a new product goes live to prod.
We don’t need to ask DevOps to clear the Akamai cache so customers will see the latest page. Let’s save them 1-2 minutes whenever a new product launches.
If you don’t know these details, or are not sure where to get them, ask DevOps for this information and say: I’ve got your back if you provide me these details 😍
After getting the above information from you, we encrypt the keys using AEM’s crypto support and store them in AEM, so you are safe with your secrets. When we use them to make the POST call from the Transport Handler, we decrypt the keys and use them.
Also, while making a POST call to the Akamai servers, we use HMAC-SHA-256 to protect the data.
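As an illustration of the signing primitive only (not Akamai’s exact EdgeGrid wire format, which also involves timestamps and nonces), an HMAC-SHA-256 over a request body can be computed with openssl; the key and body below are placeholder values:

```shell
# Sketch: HMAC-SHA-256 signing of a request body with a shared secret.
# "secret-key" and the JSON body are placeholders, not real Akamai credentials.
body='{"objects":["/content/page.html"]}'
signature=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "secret-key" | awk '{print $NF}')

echo "$signature"
```

The resulting hex digest is what would accompany the POST so the receiving side can verify the payload was not tampered with.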
Make sure you are configuring the Akamai cache flush agent on each environment separately.
This assumes you have installed the AEM.Design code into your local AEM.
Set up the Akamai flush agent on the AEM author instance only if you have Akamai set up at the author level as well. Mostly we will have Akamai set up for the publish env.
Always set up the Akamai flush agent at the publish level, so as soon as the page reaches the publish instance our Akamai flush agent will go ahead and clear the Akamai cache.
Go to miscadmin and open replication/agents.author for the author instance & replication/agents.publish for the publish instance. Here I’m showing an example of setting up the flush agent at the author level; you can set it up on the publisher the same way.
Click New in the toolbar and you will see the Create Page dialog.
Select “Akamai Publishing Replication” and give your replication agent a name & title. Click Create and open the newly created agent in the list (it will be the last entry). Click Edit, provide the required information and click OK.
You should be able to see the Akamai Flush Agent is On (green) and it will look for any replication events.
Click on Test Connection link and make sure you have all the correct configurations. You should see “Replication test succeeded”
This component will save us time whenever we need to update content in AEM pages.
Make sure you have set up the dispatcher flush agent as well so we can avoid content served from Dispatcher cache.
Feel free to reach out to us if you have any questions and don’t forget to tell your friends.