
Performance Tuning

As detailed in http://www.agiloft.com/agiloft-scalability-and-redundancy.pdf, Agiloft can support thousands of users and generate over 200,000 records per hour on an inexpensive server. The following tips will help you maximize system performance for your on-premises deployment of Agiloft.

Test Basic System Performance

If you are using our hosted service, there is no need to run this test because all our servers exceed the basic performance requirements.

If you have Agiloft installed on a server inside your firewall, you will have access to the admin console. This allows you to run a quick test to confirm that the basic setup will provide adequate performance before you spend time on optimization.

This test is entirely independent of your KB; it is designed to find issues with the hardware and system configuration. The test takes about ten minutes and will strongly impact performance while it is running, so it is best done outside of business hours. To run it, log into the admin console and select Setup > Performance > Run Performance Test. You will get a result like this:

The Time taken to import KB number depends primarily on the I/O of the server; the Time taken to generate 4,000 records depends mostly on the number and speed of CPU cores; and the Time to render reports relates to disk speed for read operations. If the record import time is greater than 300 seconds, the time taken to generate records is higher than 200 seconds, or the time to render reports is larger than 500 seconds, you are likely to experience sub-optimal interactive performance. The most common reasons are:

  • The hardware does not meet the recommendations provided at: http://www.agiloft.com/system-requirements.htm
  • You are using a database on a remote machine that is slow or is connected to the Agiloft server by a slow connection. If you need to use a remote database, we recommend using a 10G Ethernet or Infiniband connection. The best performance is achieved by using the local copy of MySQL recommended for download by the installer.
  • You are running in a virtualized environment and other virtual machines are using a lot of resources.

If you are running on a server with a local database, SSD drives and fast CPUs, you should get results similar to the following:

Performance test results:

  • Time taken to import KB: 130 seconds. Corresponds to interactive performance.
  • Time taken to generate 4,000 records: 115 seconds. Corresponds to scalability for write operations.
  • Time taken to render 40,000 reports from generated records: 480 seconds. Corresponds to scalability for read operations.

Reduce the Amount of Transferred Data

Table and record view loading times are determined in part by the number of records and fields loaded onto the screen at one time. Each of these represents a piece of data transferred from the database to the browser. If you experience slow loading times in particular tables or when opening certain kinds of records, these tips can help to reduce the data transfer and improve performance:

  • Optimize table views. Click Set View > Edit and minimize the amount of data displayed in a table view as follows:
    • On the Fields tab, only show those fields that you really need to see.
    • On the General tab, display fewer records per page, such as 25 rather than the maximum of 150.
  • Set table refresh rates to Never. Limit how often view data is refreshed by navigating to Setup > Access > Manage Teams > (Edit a Team), then set the Table View Refresh Rate to Never. Even with Never selected, the table view refreshes each time a user navigates back to the table. Note that this setting also affects the system's timeout behavior for user inactivity: if table refreshes are enabled, the user's session will never time out.
  • Reduce the amount of data transferred when records are edited. Related tables in a record layout can slow down record opening. To minimize this impact:
    • Move any related tables or embedded search result tables off the first Layout tab so they can load in the background.
    • If you have linked fields displaying as a value list with more than 50 values, change the display type to a box with lookup so the list doesn't have to be loaded.

Use a Fast Browser

Different browsers provide vastly different performance. Firefox is the recommended browser and is available on all platforms except mobile devices. Safari and Chrome are as fast as Firefox and are fully supported. IE11 is significantly faster than IE9 and is fully supported, but still slower than Firefox, Safari, or Chrome.

Disable Unnecessary Browser Add-ons

The following add-ons are known to cause performance problems in Agiloft with the Firefox browser:

  • Firebug
  • ColorZilla
  • Adblock Plus

Using Email Fields to Define Ownership

If you wish to use an email field to define table ownership, navigate to Setup > Email and SMS > Configure Inbound Email > edit or add new account > Record Mapping tab. There is an option here to store email addresses and names in separate fields.

When the email address and name are stored separately, the system can determine record ownership from the email address alone, preventing errors caused by name-sensitive email recognition and avoiding issues with indexing.

Do Not Pass Unnecessary Records to an If-then-else Action in a Time-based Rule

Passing a large number of records to an If-then-else action which then ignores 99% of them can cripple performance, because each record must be read from the database, with all Agiloft functionality applied to it, before it is passed to the If-then-else action. Fortunately, there is a simple solution: on the Condition tab of the rule, create a saved search that only finds records that some action within the If-then-else will actually operate on.

For example, the following rule is very inefficient if there are a lot of open records but only a few that are Urgent or Critical.

Condition: Find all open records

Action:

  • If (Priority == Critical) Then Send Escalation Email to Team Leader
  • Else If (Priority == Urgent) Then Send Escalation Email to Team
  • Else do not do anything

The result of the above rule is that every open record in the table is read and parsed, then ignored unless its Priority is Urgent or Critical.

It is much more efficient if the rule is constructed with:

Condition: Find all open records where Priority == Critical or Priority == Urgent
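The cost difference can be sketched outside of Agiloft. The Python below is purely illustrative (the record data and counts are invented, and this is not Agiloft rule syntax); it shows why narrowing the Condition reduces the number of records read:

```python
# Illustrative sketch: filtering in the rule's Condition versus filtering
# inside the If-then-else action. Record counts are invented.
records = [{"id": i, "status": "Open",
            "priority": "Critical" if i % 1000 == 0 else "Normal"}
           for i in range(100_000)]

# Inefficient: the condition matches every open record, so every record is
# loaded and parsed before the if-then-else discards 99% of them.
loaded = [r for r in records if r["status"] == "Open"]          # 100,000 reads
escalated = [r for r in loaded if r["priority"] == "Critical"]

# Efficient: the saved search narrows the set before any record is processed.
narrowed = [r for r in records
            if r["status"] == "Open" and r["priority"] == "Critical"]  # 100 reads

assert len(loaded) == 100_000
assert narrowed == escalated  # same outcome, a fraction of the work
```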

Tune the Database

If you are comfortable with databases and have a good feel for the makeup of the data and what searches are most frequently exercised, it can be very helpful to add database indexes. If you are unsure or need the changes to be made outside of working hours, it may be worth using our consulting services to take care of this task.

If you often search on a field such as Assigned To or Status, and the number of matching records is much lower than the number of records in the knowledgebase, you should be able to improve performance by adding a database index for that field.

For example, adding an index to the Assigned To field will speed performance if there are 100 records assigned to a particular team or individual in a database of 100,000 records, but if there are 50,000 records assigned to that individual, it will not help at all and may actually hurt performance.

Although database indexes can be very effective, you have to be careful because:

  1. Database locking makes the table unavailable while you are adding the index. This may take anywhere from a couple of minutes to several hours for tables with hundreds of thousands of records.
  2. Inappropriate or excessive indexes will actually hurt performance. In general, you should not have more than 7 or 8 indexes per table. In some cases, such as searches for "Assigned to == $global.my_Full_Name and Status == Open", it may be most efficient to add a compound index.
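The selectivity point can be illustrated with a small sketch. The SQLite example below is not Agiloft's schema (the table and column names are hypothetical, and Agiloft itself runs against MySQL); it only demonstrates how an index on a selective column, or a compound index, changes the query plan:

```python
import sqlite3

# Hypothetical table mimicking a "cases" table with an Assigned To column.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cases (id INTEGER PRIMARY KEY, "
            "assigned_to TEXT, status TEXT)")
con.executemany("INSERT INTO cases (assigned_to, status) VALUES (?, ?)",
                [("team_%d" % (i % 500), "Open") for i in range(10_000)])

# Without an index, this selective query scans the whole table.
plan_scan = con.execute("EXPLAIN QUERY PLAN SELECT * FROM cases "
                        "WHERE assigned_to = 'team_7'").fetchall()
print(plan_scan[0][-1])  # plan detail contains SCAN

# A compound index serves searches like "assigned_to == X and status == Y".
con.execute("CREATE INDEX idx_assigned_status ON cases (assigned_to, status)")
plan_index = con.execute("EXPLAIN QUERY PLAN SELECT * FROM cases "
                         "WHERE assigned_to = 'team_7' AND status = 'Open'"
                         ).fetchall()
print(plan_index[0][-1])  # plan detail names idx_assigned_status
```

If nearly every row matched the search (for example, 50,000 of 100,000 records assigned to one person), the planner would gain nothing from the index, which is why selectivity matters.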

SQL Statements Take a Long Time to Execute

To find out which SQL statements are taking a long time to execute, log in to the MySQL console as follows:

  1. Find the host, port, and password in the /etc/EnterpriseWizardConfig.xml file:

     <databaseServerHome>/usr/local/EnterpriseWizard/mysql</databaseServerHome>
     <databaseAdminLogin>root</databaseAdminLogin>
     <databaseAdminPassword>123456</databaseAdminPassword>
     <databaseAddress>127.0.0.1</databaseAddress>
     <databasePort>3333</databasePort>
  2. Run the command: 

     /usr/local/EnterpriseWizard/mysql/bin/mysql -u root --host IP_ADDRESS --port=PORT -p sw2_std -A 
  3. This will typically be: 

     /usr/local/EnterpriseWizard/mysql/bin/mysql -u root --host 127.0.0.1 --port=3333 -p sw2_std -A 
     (MySQL will prompt you to enter the password that you found in /etc/EnterpriseWizardConfig.xml)
  4. Run the following command to list all SQL statements running for more than 15 seconds:  

     select * from information_schema.processlist where command != 'Sleep' and time>15;

Large Number of tmp Tables Are Created

A large number of tmp tables is typically created when there is no covering index for a query on a table. The Agiloft optimizer tries to find a covering index for each SQL query on tables where optimization is turned on; it either informs the MySQL optimizer about that index, or logs a warning to server.log about the absence of such an index. Detailed activity of the optimizer can be obtained by turning on OptimizationLog debug output, which causes information to be written to ewoptimizer.log.

To do this, change the INFO level to DEBUG in .../jboss/server/sw/conf/log4j.xml in the following sections; no JBoss restart is required:

<appender name="EWOptimizer">
  <param name="File" value="${jboss.server.home.dir}/log/ewoptimizer.log"/>
  <param name="DatePattern" value="'.'yyyy-MM-dd"/>
  <!-- Rollover at midnight each day -->
  <param name="Append" value="true"/>
  <param name="Threshold" value="INFO"/>
  <layout>
    <param name="ConversionPattern"
           value="%d %p %x %.25c:\n%m%n\n"/>
  </layout>
</appender>

and

<category name="OptimizationLog">
  <priority value="INFO"/>
  <appender-ref ref="EWOptimizer"/>
</category>


After about a minute, ewoptimizer.log will start to be populated. The most interesting entries are successes and failures.

Successes

 ===========USER:akbaheti=====================
(((linked_id=852)&&(linked_table=company))&&(TEXT~=_E_W_%))&&((type<<261)||(((((type<<262,263)&&((f_from!=abc)||
(f_from=com.supportwizard.filters.parameters.NullParameter@38018f91)))&&((linked_table!=contacts.employees)||
((MultiValueLFSelectionCriteria{condition=1, swSavedSearchID=1757, internalSelectionCriteria=null, fieldName='f_to'})||
(f_from=$global.my_login))))||((type<<259,260)&&(f_from=abc)))||((type<<259,260)&&((linked_table!=contacts.employees)||
((MultiValueLFSelectionCriteria{condition=1, swSavedSearchID=1757, internalSelectionCriteria=null, fieldName='f_to'})||
(f_from=$global.my_login))))))
--------------------------------
OptimizationInformation{optimizationPossible=true, joinsNumber=2, possibleJoins={f_from=4}, impossibilityWarning='null',
columnDBNames=[id, linked_table, f_from, type, linked_id], selectedIndexDBNames2JoinColumns={cvr_2=[f_from]}, asIS=true} 

Here we see the final internal representation of the query, subject to optimization, and the result of the Agiloft optimizer. The query is a combination of the saved searches used and various group permission filters. In different situations, more filters may be added automatically.

  • optimizationPossible=true, means the optimization succeeded
  • joinsNumber=2, the number of joins in the final query
  • possibleJoins={f_from=4}, the possible joins
  • impossibilityWarning='null', null on success
  • columnDBNames=[id, linked_table, f_from, type, linked_id], the columns used in the query
  • selectedIndexDBNames2JoinColumns={cvr_2=[f_from]}, the selected covering index, with fat fields for join optimization
  • asIS=true, an internal flag

Failures

 =====EWOptimizationFail======USER:rmansfor=====================
((((linked_table=1545)&&(linked_id=328059))&&(((type=260)&&((email_type=1)||(outgoing_email_type=1)))||(type=261)))&&
(-TEXT-~=_E_W_%))&& 
((type<<260,261)||(((((type<<262,263)&&((f_from!=abc)||
(f_from=com.supportwizard.filters.parameters.NullParameter@38018f91)))&& 
((linked_table!=contacts.employees)||
((MultiValueLFSelectionCriteria{condition=1, swSavedSearchID=1757, internalSelectionCriteria=null, fieldName='f_to'})||
(f_from=$global.my_login))))||
((type=259)&&(f_from=abc)))||((type=259)&&((linked_table!=contacts.employees)|| 
((MultiValueLFSelectionCriteria{condition=1, swSavedSearchID=1757, internalSelectionCriteria=null, fieldName='f_to'})||(f_from=$global.my_login)))))) 
-------------------------------- 
OptimizationInformation{optimizationPossible=false, joinsNumber=0, possibleJoins={}, 
impossibilityWarning='There is not a covering index for the query. 
Sorting (first, in order) fields: ([]), 
Required fields: ([id, email_type, linked_table, f_from, outgoing_email_type, type, linked_id]), 
Optional fields: ({}).',columnDBNames=[], selectedIndexDBNames2JoinColumns={}, asIS=true}

impossibilityWarning is the only informative part:

  • Sorting (first, in order) fields: ([]), the covering index should start with these columns.
  • Required fields: ([id, email_type, linked_table, f_from, outgoing_email_type, type, linked_id]), the fields used in the query.
  • Optional fields: ({}), the set of fat fields where join optimization can be applied.

How to Construct Optimal Indexes

As described above, the optimum index for queries reported in ewoptimizer.log, or obtained from the MySQL report on long-running queries, is generally one that includes the fields from the query, in the same order they were specified in the query. Please note that creating the best index does require some experience and judgment; if it did not, database engines would create indexes automatically.
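A covering index can be sketched in miniature. The SQLite example below uses hypothetical table and column names borrowed from the log output above (Agiloft's actual schema will differ); it shows that when the index contains every column the query touches, in query order, the database can answer from the index alone:

```python
import sqlite3

# Hypothetical table loosely modeled on the fields seen in ewoptimizer.log.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE attachments (id INTEGER PRIMARY KEY, "
            "linked_table TEXT, linked_id INTEGER, f_from TEXT, type INTEGER)")
con.executemany("INSERT INTO attachments (linked_table, linked_id, f_from, type) "
                "VALUES (?, ?, ?, ?)",
                [("company", i, "user%d" % i, 261) for i in range(1000)])

# Index columns in query order: the WHERE columns first, then the
# selected columns, so the index covers the whole query.
con.execute("CREATE INDEX cvr_1 ON attachments "
            "(linked_table, linked_id, type, f_from)")
plan = con.execute("EXPLAIN QUERY PLAN SELECT f_from, type FROM attachments "
                   "WHERE linked_table = 'company' AND linked_id = 852"
                   ).fetchall()
print(plan[0][-1])  # plan detail reports a COVERING INDEX
```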

Load Testing and Performance Optimization

We provide consulting services to load test and optimize your application. The optimization process can range from one day to several weeks, depending on whether you want us to simply review log files and create or modify indexes based on the resulting information, or to create test programs that simulate production use and precisely measure the number of concurrent users your server can support.