As a reader of this blog or a listener of the podcast, you know I am a user of both Linux and decoupled homes. Traditionally, with a Linux PeopleSoft installation you need to source the delivered psconfig.sh to set your environment variables. When an entire environment was contained under its own PS_HOME, you could tweak this psconfig.sh file if customizations were needed without fear of impacting other environments. Now, with decoupled homes, the PS_HOME directory will likely be shared, so changing the psconfig.sh file located there is a bad idea.
When switching to decoupled homes, I was looking for a good way to manage sourcing the psconfig.sh file and the different environment variables. While attending Alliance 2015, I saw a presentation given by Eric Bolinger from the University of Colorado. He was talking about their approach to decoupled homes, and he had some really good ideas. The approach I currently use is mostly based on his ideas, with a few tweaks. The main difference is that he has a separate Linux user account for each environment. With that approach, he is able to store the environment-specific configuration file in each user's home directory and source it at login time. This is similar to the approach Oracle suggests and uses with their PIs (see the psadm2 user). My organization didn't go down the road of multiple users to run PeopleSoft. Instead, we have a single user that owns all the environments, and we source our environment-specific configuration file before we start psadmin. We use a psadmin wrapper script to help with this sourcing (which I will discuss and share in a future post). The main thing to keep in mind is that regardless of how these files are sourced, the same basic approach can still be used.
The idea here is to keep as much delivered and common configuration in psconfig.sh as possible, and to keep environment-specific customizations in their own separate files. I like to keep these config files in a centralized location that each server has access to via an NFS mount. I usually refer to this directory as $PSCONFIGS_DIR. What I do is copy the delivered psconfig.sh file to $PSCONFIGS_DIR and rename it psconfig.common.sh. I then remove any configurations that I know I will always want to set in our custom environment-specific file, mainly PS_HOME. I then add any needed configuration that I know will be common across all environments. (Another approach would be to create a new psconfig.common.sh file from scratch, set a few variables, and then just source the delivered file: cd $PS_HOME && . ./psconfig.sh. Either way works, but I like the cloning approach.) This common file will be called at the end of every environment-specific file. Remember to take care when making any changes to this file, as it will impact every environment calling it. It is also a good idea to review this file when patching or upgrading your tools.
Next, for the environment-specific files, I create a new file called psconfig.[env].sh, with the environment name in the filename. An example would be psconfig.fdev.sh. You could really choose any name for this, but I found this approach to be handy. In this file you set the environment-specific variables as needed, then end by calling psconfig.common.sh. Here is an example file:
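A sketch of what such a psconfig.fdev.sh might contain follows; the home paths are illustrative assumptions, so adjust them for your installation.

```shell
#!/bin/sh
# psconfig.fdev.sh - environment-specific settings for FDEV.
# All paths below are examples; adjust for your site.

export ENV=fdev
export PS_HOME=/opt/oracle/psft/pt/ps_home8.55.03
export PS_APP_HOME=/opt/oracle/psft/pt/fscm_app_home
export PS_CUST_HOME=/opt/oracle/psft/pt/fscm_cust_home

# End by sourcing the common, shared configuration.
. $PSCONFIGS_DIR/psconfig.common.sh
```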
This approach allows you to be a little more nimble when patching or upgrading. You can install new homes or middleware, then update the psconfig.[env].sh file and build new domains. When you get to go-live for Production, you can have the domains all built ahead of time. When ready, just update the config file, upgrade the database, and you are good to go!
One final note, regarding directory naming conventions. My organization tends to have our PS_CFG_HOME directory match the environment or database name exactly, i.e. fdev. I'm considering changing this, however. During our last Tools patching project, I found it a little awkward to prebuild the domains and still end up with the same directory name. It seems to make much more sense to include the PeopleTools version in the directory name. That way you can prebuild the domains in a new PS_CFG_HOME, and when you go live just blow the old home away. Another great idea I took away from Eric's presentation was how to dynamically generate a PS_CFG_HOME directory name:
export PS_CFG_HOME=/opt/pscfg/$ENV-`$PS_HOME/bin/psadmin -v | awk '{print $2}'`
If you use this technique, you will want this to be the last line in your config file, after sourcing the common file. What it does is concatenate your environment name with the PeopleTools version, using the psadmin version command, i.e. fdev-8.55.03. This gives you more clarity on which tools version the domains under a given PS_CFG_HOME were built with, and it makes it easier to prebuild your domains.
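To see the awk step in isolation, here is a small sketch with the psadmin -v output simulated by echo. The exact output format of psadmin -v varies by release; the assumption implied by the article's awk command is that the version number is the second whitespace-separated field.

```shell
#!/bin/sh
# Simulate psadmin -v with echo; assume the version is field 2.
ENV=fdev
TOOLS_VER=`echo "Version 8.55.03" | awk '{print $2}'`
export PS_CFG_HOME=/opt/pscfg/$ENV-$TOOLS_VER
echo $PS_CFG_HOME
# prints /opt/pscfg/fdev-8.55.03
```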
Hello Kyle, thanks for making me feel famous. Since I saw your article, I thought I would reply and update what we're doing with DPK/Puppet/8.55.
We extended the DPK idea to use that code on a puppet master so we’re doing our deployments entirely with puppet agent now. As of last weekend we have Portal, HCM and Finance in production using this method.
Because configuration management is now centralized, and per server overhead is greatly reduced, we are only running one PeopleSoft instance per virtual machine. With this setup we don’t need to delve into maintaining separate user environments for multiple users on a server.
What I have done is to maintain the differences between instances in Hiera data for puppet and modify the delivered template for .bashrc. Now when we deploy via puppet the user is created, its environment is set up and maintained in our .bashrc. We are no longer touching the delivered psconfig.sh or maintaining any additional text based common configurations.
If you are interested I am presenting on this setup at HEUG Alliance 2017 in February. Come say hi if you’re there!
-Eric Bolinger