
Status Future consideration
Categories Performance Tester
Created by Guest
Created on Aug 22, 2019

Additional error handling functionality for unattended test runs

As originally discussed on the forums, it was suggested that we submit an RFE. (https://www.ibm.com/developerworks/community/forums/html/topic?id=7c067db9-35f8-4936-8d86-c29d9848a9e8)

We are looking for a way to automatically stop/exit a test run via a schedule once it reaches a certain level of failures. The Performance Requirements settings offer a threshold option, but no way to attach an action when a requirement falls below the threshold. Likewise, the Error Handling tab exposes only iteration-level actions, with no test-level option.

One way to handle this could be to trigger an exit based on a net total of errors rather than a percentage. A variety of parameters to choose from would give us the flexibility to base the exit on specific status codes (e.g. unhealthy page count, 500s, VP failures, etc.). It would also help to be able to select the time frame to measure (e.g. full test, moving time frame) and the action to take when the threshold is reached (e.g. exit the test, exit/kill the agent).

Currently, we would like an option to exit the test when the number of unhealthy pages exceeds either a total failure count or a failure count within a specific time frame.
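To make the requested behavior concrete, here is a minimal sketch of the kind of threshold logic we have in mind. This is purely illustrative pseudocode of the feature, not an existing RPT API: the class name, error kinds, and parameters are all hypothetical.

```python
from collections import deque


class ErrorThresholdMonitor:
    """Hypothetical monitor: signal a test exit once the error count,
    measured over the full test or a moving time window, exceeds a
    configured threshold (a sketch of the requested RPT feature)."""

    def __init__(self, max_errors, window_seconds=None):
        self.max_errors = max_errors          # net total of errors allowed
        self.window_seconds = window_seconds  # None = measure the full test
        self._events = deque()                # (timestamp, kind) pairs

    def record_error(self, timestamp, kind):
        """Record one error, tagged by kind
        (e.g. 'unhealthy_page', 'http_500', 'vp_failure')."""
        self._events.append((timestamp, kind))

    def should_exit(self, now):
        """True once errors in the measured time frame exceed the threshold;
        the schedule would then exit the test (or kill the agent)."""
        if self.window_seconds is not None:
            # Drop errors that have aged out of the moving window.
            while self._events and now - self._events[0][0] > self.window_seconds:
                self._events.popleft()
        return len(self._events) > self.max_errors
```

For example, `ErrorThresholdMonitor(max_errors=100, window_seconds=300)` would stop the run once more than 100 errors occur within any 5-minute window, while `window_seconds=None` would count errors over the whole test.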
We'd be happy to discuss/brainstorm ideas with your team.

Idea priority High
Use Case
This would help with our command-line test runs that are automatically scheduled over the weekend. We are trying to avoid situations where a scheduled run's test(s) generate so many errors that RPT freezes (due to the large number of errors captured). Run unattended without this functionality, such a test could generate a significant number of errors/alerts to application teams.
RFE ID 135525
RFE URL
RFE Product Rational Performance Tester
  • Guest
    Feb 4, 2020

    Although the theme of this request is consistent with our business strategy, it is not committed to the release that is currently under development.

  • Guest
    Oct 21, 2019

    Following up again on the meeting to discuss design of this feature.

  • Guest
    Sep 4, 2019

    Our team would be happy to work with you to discuss design. Feel free to set up a conference call to discuss further.

  • Guest
    Sep 3, 2019

We are interested in pursuing this capability and would like to engage with you in a design thinking session to ensure we implement what is required in a manner that is useful to you and others.