Picking up from last time, we prepared our instances with Transact-SQL: we mapped the service account that Tripwire uses to every database in a given instance and granted the account the permissions it needs to function correctly. Now we are ready to configure database-specific rules so that Tripwire can report on detected changes.
For brevity, I’ll assume that readers of this post are able to complete the agent installation process, as well as the database node creation in the Tripwire console. At a later time I might go into the nuances of securing agent/manager communication with certificates (something I highly recommend), which requires an existing PKI and specific parameters in an Active Directory Certificate Services template.
By default, the Tripwire MS SQL change audit rules only monitor the master, model, msdb, and tempdb databases. The default rules include database-specific specifiers that monitor the functions, indexes, roles, stored procedures, tables, triggers, users, user-defined types, and views of each database. To be clear, there is some nesting happening here: one rule is created per monitored database, and specifiers are contained in rules, giving a one-rule-to-many-specifiers relationship.
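To make that nesting concrete, the default coverage can be sketched as data, with each rule holding one specifier per monitored object type. The names below are illustrative, not Tripwire's exact labels:

```python
# The default monitoring targets, sketched as data: one rule per database,
# each containing one specifier per object type (illustrative names only).
default_databases = ['master', 'model', 'msdb', 'tempdb']
specifier_types = ['Functions', 'Indexes', 'Roles', 'Stored Procedures',
                   'Tables', 'Triggers', 'Users', 'User Defined Types', 'Views']

# Each rule (key) maps to many specifiers (values): one-to-many.
rules = {db: [f'{spec} for {db}' for spec in specifier_types]
         for db in default_databases}
```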
Following this logic, we need to create one rule per database, with the correct database name in each of the specifiers mentioned above. Normally this is accomplished with the “duplicate rule” function in Tripwire, then changing the rule name, description, and specifier data to match the database being added. This method still requires that every specifier be “touched” for every database rule. That scales to a few, maybe even a dozen, databases, but quickly becomes infeasible as the number of databases to monitor grows.
Unfortunately, there is no bulk-edit option for creating rules. However, Tripwire can export and import rules as an XML file. The typical use for this feature is backing up and restoring previously created rules, not rule creation. Nevertheless, with some analysis, we can use the import feature to load an XML file that populates all the rules and specifiers we need for a given database node in Tripwire. We just need to know the required format of the XML file and a way to generate its content; our goal is to generate that file with Python.
To begin, we generate an export of a given database rule to study the format of the XML file. We notice that there are three main sections of the XML content: the header, the criteria, and what I’ll call the template. We keep the header the same and assign it to the variable stmt_open.
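Capturing the header is just a matter of storing it in a string. The content below is a hypothetical stand-in; the real header text should be copied verbatim from your own rule export:

```python
# Hypothetical header text -- copy the actual header verbatim from your
# exported rule XML. It is written unchanged at the top of the generated file.
stmt_open = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<rules>\n'
)
```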
Next, we create a variable for the criteria. Only a small edit is needed here: testing showed that the value of the oid parameter can (and must) be removed prior to import, since Tripwire auto-generates it upon import.
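One way to blank the oid value programmatically is a simple regex substitution. The criteria fragment below is hypothetical; the real one comes from your own export:

```python
import re

# Hypothetical criteria fragment; the real content comes from your export.
criteria = '<criterion name="databaseName" oid="-1y2p0ij32e8c7:-1y2p0ij32dotm">'

# Blank the oid value but keep the attribute, so the XML stays well-formed
# and Tripwire regenerates the value on import.
criteria_clean = re.sub(r'oid="[^"]*"', 'oid=""', criteria)
```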
The last piece of the export that we will leverage contains the syntax for defining rules and their corresponding specifiers. Since we will use a loop to generate the new XML content, we need to make sure that we index the rules numerically and populate the proper database name where needed.
The above shows how we use a “replace_me” string as a placeholder for the database name. We also use a placeholder string for the rule ID number. Note that the above only shows the placeholder for the Function specifier; placeholders need to be created for all desired specifiers.
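A minimal sketch of how the placeholders get filled in, using a hypothetical template fragment (the actual rule and specifier syntax comes from your export):

```python
# Hypothetical template fragment: "replace_me" stands in for the database
# name and "rule_id" for the numeric rule index.
template = (
    '  <rule name="replace_me Change Audit" id="rule_id">\n'
    '    <specifier>Functions for replace_me</specifier>\n'
    '  </rule>\n'
)

def render(template, db_name, rule_id):
    """Fill in the database-name and rule-ID placeholders."""
    return (template.replace('replace_me', db_name)
                    .replace('rule_id', str(rule_id)))
```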
Now that the variables for content generation are defined, we can write the logic the Python script uses to generate the XML file. The below details how the variables combine to create the file. Once Python generates the XML, all that is left is to import it into Tripwire, and bam: hours’ worth of clicking saved.
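Putting it together, the generation loop might look like the following. The section strings here are trivial placeholders standing in for the real header, criteria, template, and closing tags from your export, and the database list is whatever set of names you need to monitor:

```python
# Placeholder section strings -- substitute the real content from your export.
stmt_open = '<header/>\n'    # export header (kept as-is)
criteria = '<criteria/>\n'   # criteria block with the oid value blanked
template = '<rule id="rule_id">replace_me</rule>\n'  # rule/specifier template
stmt_close = '</export>\n'   # closing tags so the document is well-formed

# Hypothetical list of databases to monitor.
databases = ['SalesDB', 'InventoryDB', 'ReportingDB']

with open('tripwire_rules.xml', 'w') as f:
    f.write(stmt_open)
    f.write(criteria)
    # One rule per database, numbered sequentially.
    for rule_id, db_name in enumerate(databases, start=1):
        f.write(template.replace('replace_me', db_name)
                        .replace('rule_id', str(rule_id)))
    f.write(stmt_close)
```

The resulting file is what gets imported through Tripwire's rule import feature.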
I hope you are also able to use this approach to ensure all your databases are monitored. Incorporating these steps into a weekly or monthly maintenance process would be a final step toward continually ensuring that every database on a given instance is monitored. I’m sure some readers are already thinking about how to automate these steps further, and I encourage you to share your thoughts in a comment.