Wednesday, 18 May 2022

Loop through the values of a set of related variables


A ForEach Controller loops through the values of a set of related variables. When you add samplers (or controllers) to a ForEach Controller, each of them is executed one or more times, and on every loop the return variable takes a new value. The input should consist of several variables, each named with a common prefix extended by an underscore and a number, and each such variable must have a value. So for example, when the input variable has the name inputVar, the following variables should have been defined:

  • inputVar_1 = wendy
  • inputVar_2 = charles
  • inputVar_3 = peter
  • inputVar_4 = john

Note: the "_" separator is now optional.

If the return variable is named "returnVar", the samplers and controllers under the ForEach Controller will be executed four consecutive times, with the return variable taking each of the above values in turn; that value can then be used inside the samplers.
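The controller's lookup behaviour can be sketched in plain Python (an illustration of the behaviour, not JMeter code):

```python
variables = {
    "inputVar_1": "wendy",
    "inputVar_2": "charles",
    "inputVar_3": "peter",
    "inputVar_4": "john",
}

def foreach(vars_map, input_name, return_name):
    # Mimics the ForEach Controller: walk inputVar_1, inputVar_2, ...
    # until a number is missing, exposing each value under the
    # return variable for that loop iteration.
    i = 1
    while f"{input_name}_{i}" in vars_map:
        yield {return_name: vars_map[f"{input_name}_{i}"]}
        i += 1

seen = [scope["returnVar"] for scope in foreach(variables, "inputVar", "returnVar")]
```

Each iteration corresponds to one pass over the samplers under the controller, with `${returnVar}` resolving to the current value.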




In this example, we created a Test Plan that sends a particular HTTP Request only once and sends another HTTP Request to every link that can be found on the page.

Figure 7 - ForEach Controller Example

We configured the Thread Group for a single thread and a loop count value of one. You can see that we added one HTTP Request to the Thread Group and another HTTP Request to the ForEach Controller.

After the first HTTP Request, a Regular Expression Extractor is added, which extracts all the HTML links from the returned page and stores them in the numbered inputVar variables.

In the ForEach loop, an HTTP Sampler is added which requests each of the links that were extracted from the first returned HTML page.
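A rough Python equivalent of what the extractor does (the HTML snippet and the href-only regex are simplified assumptions, not the exact expression from the test plan):

```python
import re

# Hypothetical response body returned by the first HTTP Request.
html = '<a href="/home">Home</a> <a href="/about">About</a>'

# Extract every link target, as a Regular Expression Extractor with
# "Match No." set to -1 would do.
links = re.findall(r'href="([^"]+)"', html)

# JMeter stores every match in numbered variables plus a matchNr count,
# which is exactly the input shape the ForEach Controller expects.
jmeter_vars = {f"inputVar_{i}": link for i, link in enumerate(links, 1)}
jmeter_vars["inputVar_matchNr"] = str(len(links))
```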


Friday, 28 January 2022

Performance Tuning: Garbage Collection

Spark runs on the Java Virtual Machine (JVM). Because Spark can store large amounts of data in memory, it relies heavily on Java's memory management and garbage collection. Garbage collection can therefore be a major issue affecting many Spark applications.


Common symptoms of excessive garbage collection in Spark are:
  • Application slowdown.
  • Executor heartbeat timeouts.
  • "GC overhead limit exceeded" errors.


The first step in garbage collection tuning is to collect statistics by passing the -verbose:gc JVM option (for example via spark.executor.extraJavaOptions) when submitting Spark jobs.
In an ideal situation, we try to keep GC overhead below roughly 10% of total execution time.
The Spark execution engine and Spark storage can both store data off-heap. 
You can switch on off-heap storage using the following spark-submit options:
--conf spark.memory.offHeap.enabled=true
--conf spark.memory.offHeap.size=Xg
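As a sketch, the flags above (together with the verbose GC option) can be assembled like this; the 2g size is an arbitrary example value, not a recommendation:

```python
# spark-submit configuration for GC logging plus off-heap storage.
conf = {
    "spark.executor.extraJavaOptions": "-verbose:gc",
    "spark.memory.offHeap.enabled": "true",
    "spark.memory.offHeap.size": "2g",
}

# Render the settings as command-line --conf flags.
flags = " ".join(f"--conf {key}={value}" for key, value in conf.items())
```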
If using RDD-based applications, use data structures with fewer objects. For example, use an array instead of a list.
If you are dealing with primitive data types, consider using specialized data structures like Koloboke or fastutil. These structures optimize memory usage for primitive types.
Be careful when using off-heap storage, as it does not affect the on-heap memory size, i.e. it won't shrink the heap. So, to stay within an overall memory limit, assign a smaller heap size.
If you are using Spark SQL, try to use the built-in functions as much as possible instead of writing new UDFs. Most built-in functions can operate directly on UnsafeRow and don't need to convert the data to wrapper types; this avoids creating garbage, and it also plays well with code generation.
Remember we may be working with billions of rows. If we create even a small temporary object with 100-byte size for each row, it will create 1 billion * 100 bytes of garbage.
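The arithmetic behind that claim is worth spelling out:

```python
rows = 1_000_000_000           # one billion rows
temp_bytes_per_row = 100       # one small 100-byte temporary object per row

# Total short-lived garbage generated across the dataset.
total_garbage = rows * temp_bytes_per_row
gigabytes = total_garbage / 10**9
```

That is 100 GB of short-lived allocations the collector has to clean up, from a single innocuous-looking object per row.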

Monday, 10 January 2022

Performance Testing and Engineering Knowledge Repository

 

A complete Performance Testing and Engineering Knowledge Repository:

https://github.com/santhoshjsh/PTPEKR

Friday, 26 July 2019

How to Use Grafana to Monitor JMeter Non-GUI Results

How to Integrate JMeter with Grafana


1. Install and Configure InfluxDB


First of all, we need a JMeter performance script to test.

As soon as we have the performance script in place, we need to take care of the InfluxDB and Grafana installation.

First of all, we need to install InfluxDB as a permanent storage space for our performance metrics.

1. Download and install InfluxDB: https://portal.influxdata.com/downloads/
2. Unzip the setup file (influxdb-1.8.0_windows_amd64) for Windows.
3. Run influxd (the application file, by double-clicking it); the server will be up and running.
4. Run influx to execute commands.

To verify that InfluxDB is up and running, open a terminal window and run this command: influx

If the installation was completed successfully and the database is up and running, you will see an InfluxDB command-line interface. This can be used for interacting with the database.

By using the ‘SHOW DATABASES’ command, you can see the list of all existing InfluxDB databases. If you have just installed InfluxDB, you should see only one ‘_internal’ database, which is used for keeping different stats about the database itself.

At this point, we can create a new database to store our performance metrics. For that, you need to be logged in to the influx command-line interface and run this command: CREATE DATABASE jmeter

After that you should see your newly created database, by using the same ‘SHOW DATABASES’ command we used in the previous step:


Once we have created a database for our metrics, we need to make a few changes to the InfluxDB configuration. The configuration file is located here:
"<Rootfolder>\influxdb-1.7.7_windows_amd64\influxdb-1.7.7-1\influxdb.conf"

In this configuration file you need to find, uncomment and edit the ‘[[graphite]]’ category appropriately:

[[graphite]]
  # Determines whether the graphite endpoint is enabled.
  enabled = true
  database = "jmeter"
  retention-policy = ""
  bind-address = ":2003"
  protocol = "tcp"
  consistency-level = "one"
  batch-size = 5000
  batch-pending = 10
  batch-timeout = "1s"
  udp-read-buffer = 0
  separator = "."

After that, you need to restart InfluxDB so that it picks up the edited configuration:

"<Rootfolder>\influxdb-1.7.7_windows_amd64\influxdb-1.7.7-1>influxd -config influxdb.conf"

Congratulations! We have completed the first step of our long road to establish the integration of JMeter with Grafana monitoring. Now it’s time to push the metrics into the database we created.

Push Performance Metrics from JMeter to InfluxDB


To push performance metrics from JMeter to InfluxDB, we need to use the Backend Listener.  This listener enables writing metrics directly to the database.
Let’s add the Backend Listener to our performance script:

  • Backend Listener implementation - this is the implementation class that will be used as a listener for JMeter test metrics. The value for this parameter depends on the protocol we are going to use. If you remember, we enabled the graphite protocol in the InfluxDB configuration, so here we need to use the ‘GraphiteBackendListenerClient’.
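For reference, the graphite plaintext protocol this listener speaks is just one line per metric; a minimal sketch of the wire format (the value and timestamp here are made up):

```python
import time

# The GraphiteBackendListenerClient writes lines of the form
# "<metric> <value> <timestamp>" to InfluxDB's :2003 graphite endpoint.
metric = "jmeter.all.a.avg"     # one of the measurements JMeter reports
value = 245.0                   # example average response time
ts = int(time.time())           # Unix timestamp in seconds

line = f"{metric} {value} {ts}\n"
```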

There are different dashboard setups for the metrics sent to InfluxDB, each with its own Backend Listener configuration:

Type 1: JMeter Load Test Dashboard

Configure the Backend Listener:



Type 2: Apache JMeter Dashboard using Core


Configure the Backend Listener:


Type 3: JMeter Dashboard
Configure the Backend Listener:


  • Once the configuration is in place, we can run our test execution.

After the test execution is completed, we can check the InfluxDB and verify that our metrics were reported there successfully. To do so, open the InfluxDB command line interface again and use this command:

> USE jmeter
> SHOW MEASUREMENTS
> SELECT * FROM "jmeter.all.a.avg"

We should find metrics with a timestamp and an appropriate value:



Now that we see that all metrics were reported successfully from JMeter to InfluxDB, we are ready for the last step - visualize reported metrics using Grafana.

Monitoring Performance Metrics in Grafana


First of all, let’s install Grafana on our local machine: https://grafana.com/grafana/download

After that, Grafana should be available at http://localhost:3000. Use ‘admin’ as the default username and password to log in.

First of all, we need to specify the data source with our metrics. Click on “Add data source” on the welcome page:




On the next page put the appropriate configuration based on our previous steps, and click on the “Add” button to verify that Grafana can connect to InfluxDB:



Now we can import our first dashboard in Grafana. Open the Grafana menu by clicking on the top left button and go to Dashboards -> Import:


Then type a dashboard ID (5496, 4026, or 1152) and click on Load.