Software Testing (Tester): Responsible for testing and evaluating the software applications and systems that make computers function.
- Design and execute test plans on computer applications.
- Record and document results and compare them to expected results.
- Detect software failures so that defects may be discovered and corrected.
- Generate historical analysis of test results.
- Document anomalies and issues.
- Maintain a database of software defects.
- Examine the code and execution of code in various environments.
- Verify specific action or function of the code.
- Operate and maintain test networks.
- Provide application instructions for users.
- Develop and document application test plans based on software requirements and technical specifications.
- Create meaningful error handling procedures for application code.
- Ensure compliance with general programming best practices, accepted web standards and those standards set forth by upstream sources.
- Perform application security audits, reviews, walkthroughs, or inspections.
- Implement application designs; create queries, scripts, web pages and other deliverables.
- Participate in application planning meetings.
- Ensure data integrity standards.
Note: responsibilities vary widely with the job description and the company.
Entry level:
- Prior V&V (Verification & Validation) experience.
- Experienced designing and coding unit tests for complex cyber-physical systems.
- Skilled at developing test plans, software stress testing, and designing large scale integration tests.
- Prior experience in programming in C/C++/Python.
- Prior experience in developing automated testing pipelines and release management systems.
- Skilled in isolating, documenting, and tracking issues systematically.
- Experience analyzing test results and working with automated reporting systems.
- Knowledge of software infrastructure testing, automated test processes, and agile software development processes.
- Knowledge of software tools such as integrated development environments (IDEs) like Eclipse.
Mid-level:
- Experience with SmartBear TestComplete and TestExecute, Java, web applications, Agile development processes, and SQL.
- Experience with Git, Jenkins, Unix Shell scripting.
- Minimum of 3 years of proven testing experience as a software tester.
- Minimum of 2 years' experience using TestComplete, or a minimum of 3 years' experience with automated software testing and JavaScript.
- 1+ years designing and implementing test plans, scripts, and scenarios.
- Experience in reviewing user requirements and documentation, test plan development, developing test data, transforming test plans into test scripts.
- Experience using TestComplete.
- Experience analyzing test results to include documenting conclusions and making recommendations.
Senior level:
- 7+ years of experience working on software products and mobile applications in QA/automation testing.
- 3+ years’ experience in C++ in a production environment.
- Thorough understanding of all test fundamentals, SDLC, test management tools, and defect tracking tools.
- Experience with mobile applications and/or embedded system testing.
- Excellent communication, problem-solving, debugging and troubleshooting skills to root cause complex issues.
Certification:
- Foundation Certificate in Software Testing from ISEB/ISTQB (International Software Testing Qualifications Board).
- IAT (Information Assurance Technical) Level III certification: CISA (Certified Information Systems Auditor), GSE (GIAC Security Expert, where GIAC stands for Global Information Assurance Certification), SCNA (Security Certified Network Architect), or CISSP (Certified Information Systems Security Professional) or Associate.
- ITIL (Information Technology Infrastructure Library) certification.
- Certifications/Licenses: Certified Tester – Foundation Level (CTFL), Certified Software Test Engineer (CSTE), Certified Software Quality Analyst (CSQA), Certified Software Test Professional (CSTP), Certified Manager of Software Testing (CMST).
INTERVIEW QUESTIONS:
Q. Describe software review and formal technical review (FTR).
Software reviews act as a filter for the software process: they help uncover errors and defects in the software and enhance its quality. Reviews refine the software work products, including requirements and design models, code, and test data. A formal technical review (FTR) is a software quality control activity in which software developers and other team members participate. The objectives of an FTR are to:
- Uncover errors.
- Verify that the software under review meets its requirements.
- Ensure that the software follows predefined standards.
- Make the project more manageable.
FTRs include walkthroughs and inspections. Each FTR is conducted as a regular meeting, and it will be successful only if it is properly planned and executed.
Q. What are the attributes of a good test case?
- A good test has a high probability of finding an error. To find the most errors, the tester and developer need a complete understanding of the software and should try to exercise every condition under which it might fail.
- A good test is not redundant. Every test should have a purpose different from the others; otherwise, the tester repeats the testing process for the same condition.
- A good test should be neither too simple nor too complex. In general, each test should be executed separately; combining several tests into one test case can make it difficult to execute and can mask errors.
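As a brief sketch of the "not redundant" attribute (in Python with pytest; the `parse_age` function is hypothetical), each test below targets one distinct failure condition, so no two tests cover the same ground:

```python
# Hypothetical example: each test exercises one distinct condition,
# so no test duplicates another (requires pytest).
import pytest

def parse_age(text):
    """Parse a non-negative integer age from a string."""
    value = int(text)  # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("age cannot be negative")
    return value

def test_valid_age():
    assert parse_age("42") == 42

def test_non_numeric_input_rejected():
    with pytest.raises(ValueError):
        parse_age("forty-two")

def test_negative_age_rejected():
    with pytest.raises(ValueError):
        parse_age("-1")
```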
Q. Explain Structure-based testing techniques.
- Structure-based testing techniques are dynamic techniques that use the internal structure of the software to derive test cases.
- They are usually termed 'white-box' or 'glass-box' techniques.
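As a small illustration (a sketch, with a hypothetical `classify` function), white-box tests are derived from the code's internal branches rather than from its specification, aiming to execute each branch at least once:

```python
# Hypothetical white-box example: one test per branch of the
# implementation's if/elif/else structure (run with pytest).
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def test_negative_branch():
    assert classify(-5) == "negative"

def test_zero_branch():
    assert classify(0) == "zero"

def test_positive_branch():
    assert classify(3) == "positive"
```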
Q. Explain component testing.
Component testing is also termed unit, module, or program testing. It looks for defects in the software and verifies its functioning. Depending on the context of the development life cycle and the system, it can be done in isolation from the rest of the system. Stubs and drivers are typically used to replace the missing software and to simulate the interface between software components in a simple manner: a stub is called by the component under test, while a driver calls the component to be tested.
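A minimal sketch of this idea (the `ReportService` and database components are hypothetical): the stub stands in for the missing database, while the test function itself plays the driver role by calling the component under test:

```python
# Hypothetical component test: the stub replaces the real database,
# and the test acts as the driver that calls ReportService.
class DatabaseStub:
    """Stub: returns canned data instead of querying a real database."""
    def fetch_sales(self):
        return [100, 250, 50]

class ReportService:
    """Component under test; depends on a database-like object."""
    def __init__(self, db):
        self.db = db

    def total_sales(self):
        return sum(self.db.fetch_sales())

def test_total_sales_with_stubbed_database():
    service = ReportService(DatabaseStub())  # driver calls the component
    assert service.total_sales() == 400      # 100 + 250 + 50
```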
Q. Differentiate between re-testing and regression testing.
Regression testing: re-running a suite of tests after new bug fixes to ensure that they have not caused problems in areas that worked before, including defects fixed earlier.
Re-testing: testing a single defect that was just fixed. Only one test is performed, and the goal is to confirm that the defect was actually fixed properly.
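One way to make the distinction concrete (a sketch using pytest markers; the `login` function and bug number are hypothetical): the marked test is the single re-test for the fixed defect, while regression testing re-runs the entire suite:

```python
# Hypothetical illustration; register the 'retest' marker in pytest.ini
# to avoid warnings.
import pytest

def login(user, password):
    # Hypothetical fixed code: previously rejected non-ASCII passwords.
    return bool(user) and len(password) >= 8

@pytest.mark.retest
def test_bug_1234_unicode_password_accepted():
    # Re-testing: one test confirming the specific fix works.
    assert login("alice", "pässwörd1") is True

# Re-testing alone:   pytest -m retest
# Regression testing: pytest   (re-run the whole suite afterwards)
```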
Q. How do drivers and stubs relate to manual testing?
- Drivers and stubs are a part of incremental testing.
- The two approaches used in incremental testing are the top-down and bottom-up methods.
- Drivers are used in the bottom-up approach.
- A driver is a module that runs (calls) the component being tested.
- Stubs are used in the top-down approach.
- A stub is a stand-in for a lower-level component; it allows the component that calls it to be tested before the real component exists.
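A short sketch of both roles with hypothetical components: in the bottom-up approach a driver exercises a finished low-level module, while in the top-down approach a stub stands in for a low-level module that is not written yet:

```python
# Bottom-up: the low-level component is finished; a driver calls it.
def tax(amount):                   # real, completed low-level component
    return round(amount * 0.2, 2)

def test_driver_for_tax():         # driver: runs the component under test
    assert tax(100.0) == 20.0

# Top-down: the high-level component is finished; a stub replaces the
# low-level function it calls until the real one exists.
def checkout(amount, tax_fn):      # component under test
    return amount + tax_fn(amount)

def tax_stub(amount):              # stub: canned replacement for tax()
    return 5.0

def test_checkout_with_stub():
    assert checkout(100.0, tax_stub) == 105.0
```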
Q. What is Sanity testing?
Sanity testing is used to ensure that multiple or conflicting functions or variables do not exist in the system. It verifies that the components of the application can be compiled without a problem. It is conducted on all parts of the application.
Q. What is Ad-hoc testing?
It is a type of testing that is performed without the use of planning and/or documentation. These tests are run only one time unless a defect is found. If a defect is found, testing can be repeated. It is considered to be a part of exploratory testing.
Q. What is Smoke testing?
Smoke testing covers all of the basic functionality of the application. It is considered the main test for checking the functionality of the application, but it does not test the finer details.
Q. Explain the following.
- Compatibility testing: It is a non-functional test performed on a software system or component for checking its compatibility with the other parts in the computing environment. This environment covers the hardware, servers, operating system, web browsers, other software, etc.
- Integration testing: This test is performed to verify the interfaces between system components, interactions between the application and the hardware, file system, and other software. Ideally, an integration testing team should perform it.
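For contrast with the stubbed component test sketched earlier, here is a minimal, hypothetical integration test: two real components are wired together, so the test exercises the interface between them and the real file system:

```python
# Hypothetical integration test: real components interact with each
# other and with the file system instead of being replaced by stubs.
import json
import os
import tempfile

def save_config(path, config):
    with open(path, "w") as f:
        json.dump(config, f)

def load_config(path):
    with open(path) as f:
        return json.load(f)

def test_save_then_load_roundtrip():
    path = os.path.join(tempfile.mkdtemp(), "config.json")
    save_config(path, {"retries": 3})
    # Verifies the interface between the two components and the file system.
    assert load_config(path) == {"retries": 3}
```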
Q. What are 5 common problems in the software development process?
- Bad requirements – requirements that are unclear, incomplete, too general, or not testable cause problems.
- Unrealistic schedule – expecting too much in too little time.
- Inadequate testing – without sufficient testing, no one knows whether the system will behave as expected.
- Adding new features after development has started – quite common.
- Poor communication – within the team or with the customer.
Q. What do you mean by “Software Quality”?
- Free of bugs.
- Delivered on time.
- Within the budget.
- Meets requirements.
- Is easily maintainable.
Q. Differentiate between exception and validation testing.
Validation testing checks the software's conformance to the specified requirements. It aims to demonstrate that the software works in the manner expected by the customer.
Exception testing deals with exception handling: it verifies how the control flow of the AUT (Application Under Test) changes when an exception arises.
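A small sketch of an exception test in pytest (the `withdraw` function is hypothetical): instead of checking the normal flow, the test deliberately triggers the error path and verifies how the AUT responds:

```python
# Hypothetical exception test: force the error path and verify that
# the right exception (and message) is raised.
import pytest

def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_overdraft_raises_clear_error():
    with pytest.raises(ValueError, match="insufficient funds"):
        withdraw(balance=50, amount=100)
```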
Q. When should you stop testing?
This is difficult to determine, as many modern software applications are so complex and run in such interdependent environments that complete testing is never possible.
However, common factors which help in deciding when to stop are:
- Release or testing deadlines have been reached.
- Test cases are completed with a certain percentage passed.
- The test budget is exhausted.
- Coverage of code/functionality/requirements has reached a specified point.
- Bug rate is below a certain level.
- Beta or alpha testing period has ended.
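As one way to make the coverage criterion above concrete (a sketch using the third-party coverage.py library; the 80% threshold and the toy `grade` function are assumptions, not a standard):

```python
# Sketch: measuring coverage programmatically with coverage.py
# (pip install coverage). Threshold and code under test are illustrative.
import coverage

def grade(score):  # toy code under test
    return "pass" if score >= 50 else "fail"

cov = coverage.Coverage()
cov.start()
assert grade(75) == "pass"  # exercises only one branch of grade()
cov.stop()

percent = cov.report()  # prints a report and returns total coverage %
if percent < 80.0:
    print("Coverage below the agreed 80% exit criterion; keep testing.")
```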
Q. What are the usual difficulties that testers face during software testing?
- Incomplete or unclear specifications.
- Limited resources.
- Lack of planning regarding the testing approach.
- Unclear priorities.
- Lack of time for testing.
Q. What causes bugs in the software?
- Unclear requirements.
- Poor documentation.
- Constantly changing requirements.
- Programming errors.
- Lack of time for testing.
Q. Which type of software should not be automated?
- Unstable or incomplete software, since it is still undergoing changes.
- Test scripts that are run only occasionally.
- Code and document reviews.