Manual Testing FAQs

1.What makes a good QA or Test manager?
A good QA, test, or combined QA/Test manager should:
1. be familiar with the software development process
2. be able to maintain the enthusiasm of their team and promote a positive atmosphere, despite what is a somewhat 'negative' process (e.g., looking for or preventing problems)
3. be able to promote teamwork to increase productivity
4. be able to promote cooperation between software, test, and QA engineers
5. have the diplomatic skills needed to promote improvements in QA processes
6. have the ability to withstand pressures and say 'no' to other managers when quality is insufficient or QA processes are not being adhered to
7. have people-judgement skills for hiring and keeping skilled personnel
8. be able to communicate with technical and non-technical people, engineers, managers, and customers
9. be able to run meetings and keep them focused
A good manager should also be able to choose a test strategy based on the scope of the release, to minimise stress on the team.

2.What if there is not enough time for thorough testing?
Use risk analysis to determine where testing should be focused.
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. This requires judgement skills, common sense, and experience. Considerations can include:
- Which functionality is most important to the project's intended purpose.
- Which functionality is most visible to the user.
- Which functionality has the largest safety impact.
- Which functionality has the largest financial impact on users.
- Which aspects of the application are most important to the customer.
- Which aspects of the application can be tested early in the development cycle.
- Which parts of the code are most complex, and thus most subject to errors.
- Which parts of the application were developed in rush or panic mode.
- Which aspects of similar/related previous projects caused problems.
- Which aspects of similar/related previous projects had large maintenance expenses.
- Which parts of the requirements and design are unclear or poorly thought out.
- What do the developers think are the highest-risk aspects of the application.
- What kinds of problems would cause the worst publicity.
- What kinds of problems would cause the most customer service complaints.
- What kinds of tests could easily cover multiple functionalities.
- Which tests will have the best high-risk-coverage to time-required ratio.
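One way to make the last consideration concrete is to score each candidate test area on impact and likelihood and sort by risk coverage per unit of testing time. The sketch below uses made-up areas, weights, and effort figures purely for illustration:

```python
# Sketch of risk-based test prioritization: score = (impact * likelihood) / effort.
# The candidate areas and weights below are illustrative, not from any real project.

def prioritize(candidates):
    """Sort test candidates by risk coverage per unit of testing time."""
    return sorted(candidates,
                  key=lambda c: (c["impact"] * c["likelihood"]) / c["effort_hours"],
                  reverse=True)

candidates = [
    {"area": "payment processing", "impact": 5, "likelihood": 3, "effort_hours": 8},
    {"area": "report formatting",  "impact": 2, "likelihood": 4, "effort_hours": 2},
    {"area": "login/security",     "impact": 5, "likelihood": 2, "effort_hours": 4},
]

for c in prioritize(candidates):
    score = (c["impact"] * c["likelihood"]) / c["effort_hours"]
    print(c["area"], round(score, 2))
```

Here the cheap-but-risky "report formatting" tests come out first, even though "payment processing" has the highest raw impact, because the ratio accounts for the time each area costs.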

3.What if an organization is growing so fast that fixed QA processes are impossible?

This is a common problem in the software industry, especially in new technology areas. There is no easy solution in this situation, other than:
-Hire good people
-Management should 'ruthlessly prioritize' quality issues and maintain focus on the customer
-Everyone in the organization should be clear on what 'quality' means to the customer

When time is short, the project team might opt for agile testing. In agile testing, testers take part in requirement and development discussions and build a good amount of knowledge about the system under test. Major characteristics of agile testing are small, frequent releases and a good amount of regression testing on already-deployed functionality. Daily scrum meetings are a key factor, and development and testing go hand in hand with very little documentation. If the team encounters a defect, the development and business teams are informed immediately and everyone works on it together.

4.What if the project is not big enough to justify extensive testing?
Consider the impact of project errors, not the size of the project. However, if extensive testing is still not justified, risk analysis is again needed. The tester might then do ad hoc testing, or write up a limited test plan based on the risk analysis.
Understanding the customer's requirements is the key here.

5.What should be done after a bug is found?
The bug needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn't create problems elsewhere. If a problem-tracking system is in place, it should encapsulate these processes. A variety of commercial problem-tracking/management software tools are available (see the 'Tools' section for web resources with listings of such tools). The following are items to consider in the tracking process:
-Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
-Bug identifier (number, ID, etc.)
-Current bug status (e.g., 'Released for Retest', 'New', etc.)
-The application name or identifier and version
-The function, module, feature, object, screen, etc. where the bug occurred
-Environment specifics, system, platform, relevant hardware specifics
-Test case name/number/identifier
-One-line bug description
-Full bug description
-Description of steps needed to reproduce the bug if not covered by a test case or if the developer doesn't have easy access to the test case/test script/test tool
-Names and/or descriptions of file/data/messages/etc. used in test
-File excerpts/error messages/log file excerpts/screen shots/test tool logs that would be helpful in finding the cause of the problem
-Severity estimate (a 5-level range such as 1-5 or 'critical'-to-'low' is common)
-Was the bug reproducible?
-Tester name
-Test date
-Bug reporting date
-Name of developer/group/organization the problem is assigned to
-Description of problem cause
-Description of fix
-Code section/file/module/class/method that was fixed
-Date of fix
-Application version that contains the fix
-Tester responsible for retest
-Retest date
-Retest results
-Regression testing requirements
-Tester responsible for regression tests
-Regression testing results
A reporting or tracking process should enable notification of appropriate personnel at various stages. For instance, testers need to know when retesting is needed, developers need to know when bugs are found and how to get the needed information, and reporting/summary capabilities are needed for managers.
1. The first step after a bug is found is to reproduce it and note down the steps for reproduction very clearly.
2. The bug's description and its reproduction steps should then be posted in the tool used by the team.
3. If possible, a peer review of the bug should be done.
4. The posted bug should be communicated to the team; if the tool has a built-in feature for communicating bugs, that is also fine.
5. After the bug is fixed, it should be retested, along with any functionality related to the posted bug, before the bug is closed.
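The tracking items and workflow described above can be modeled as a simple record with an enforced status lifecycle. This is a minimal sketch; the field names, statuses, and allowed transitions are illustrative, not taken from any particular tracking tool:

```python
# Minimal bug-tracking record with a status workflow.
# Statuses and transitions below are illustrative; real tools define their own.
from dataclasses import dataclass, field

ALLOWED = {
    "New":                 {"Assigned"},
    "Assigned":            {"Fixed"},
    "Fixed":               {"Released for Retest"},
    "Released for Retest": {"Closed", "Reopened"},
    "Reopened":            {"Assigned"},
}

@dataclass
class BugReport:
    bug_id: str
    summary: str
    severity: int                    # e.g. 1 (critical) .. 5 (low)
    steps_to_reproduce: list = field(default_factory=list)
    status: str = "New"

    def move_to(self, new_status):
        """Advance the bug through the workflow, rejecting invalid jumps."""
        if new_status not in ALLOWED.get(self.status, set()):
            raise ValueError(f"cannot go from {self.status!r} to {new_status!r}")
        self.status = new_status

bug = BugReport("BUG-101", "Crash on empty input", severity=1,
                steps_to_reproduce=["Open app", "Submit empty form"])
bug.move_to("Assigned")
bug.move_to("Fixed")
bug.move_to("Released for Retest")
print(bug.status)
```

Encoding the transitions as data (the `ALLOWED` table) makes the "notification at various stages" requirement easy to bolt on: each transition is a natural hook for notifying the tester or developer involved.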

6.What makes a good Software QA engineer?
The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.
A good QA engineer should have:
1. Problem-solving skills
2. A good memory
3. Communication skills
4. No ego or attitude problems
5. Good knowledge of processes
6. The ability to understand user problems clearly
7. The ability to suggest solutions by explaining the advantages of implementing them

7.What is a 'test plan'?
 A software project test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the efforts needed to validate the acceptability of a software product. The completed document will help people outside the test group understand the 'why' and 'how' of product validation. It should be thorough enough to be useful but not so thorough that no one outside the test group will read it. The following are some of the items that might be included in a test plan, depending on the particular project:
  1. Title
  2. Identification of software including version/release numbers
  3. Revision history of document including authors, dates, approvals
  4. Table of Contents
  5. Purpose of document, intended audience
  6. Objective of testing effort
  7. Software product overview
  8. Relevant related document list, such as requirements, design documents, other test plans, etc.
  9. Relevant standards or legal requirements
  10. Traceability requirements
  11. Relevant naming conventions and identifier conventions
  12. Overall software project organization and personnel/contact-info/responsibilities
  13. Test organization and personnel/contact-info/responsibilities
  14. Assumptions and dependencies
  15. Project risk analysis
  16. Testing priorities and focus
  17. Scope and limitations of testing
  18. Test outline - a decomposition of the test approach by test type, feature, functionality, process, system, module, etc. as applicable
  19. Outline of data input equivalence classes, boundary value analysis, error classes
  20. Test environment - hardware, operating systems, other required software, data configurations, interfaces to other systems
  21. Test environment validity analysis - differences between the test and production systems and their impact on test validity.
  22. Test environment setup and configuration issues
  23. Software migration processes
  24. Software CM processes
  25. Test data setup requirements
  26. Database setup requirements
  27. Outline of system-logging/error-logging/other capabilities, and tools such as screen capture software, that will be used to help describe and report bugs
  28. Discussion of any specialized software or hardware tools that will be used by testers to help track the cause or source of bugs
  29. Test automation - justification and overview
  30. Test tools to be used, including versions, patches, etc.
  31. Test script/test code maintenance processes and version control
  32. Problem tracking and resolution - tools and processes
  33. Project test metrics to be used
  34. Reporting requirements and testing deliverables
  35. Software entrance and exit criteria
  36. Initial sanity testing period and criteria
  37. Test suspension and restart criteria
  38. Personnel allocation
  39. Personnel pre-training needs
  40. Test site/location
  41. Outside test organizations to be utilized and their purpose, responsibilities, deliverables, contact persons, and coordination issues
  42. Relevant proprietary, classified, security, and licensing issues.
  43. Open issues
A test plan is a strategic document that describes how to test an application in an effective, efficient, and optimised way. It is a project-level term, specific to a particular project.

8.How can Software QA processes be implemented without stifling productivity?
By implementing QA processes slowly over time, using consensus to reach agreement on processes, and adjusting and experimenting as an organization grows and matures, productivity will be improved instead of stifled. Problem prevention will lessen the need for problem detection, panics and burn-out will decrease, and there will be improved focus and less wasted effort. At the same time, attempts should be made to keep processes simple and efficient, minimize paperwork, promote computer-based processes and automated tracking and reporting, minimize time required in meetings, and promote training as part of the QA process. However, no one - especially talented technical types - likes rules or bureaucracy, and in the short run things may slow down a bit. A typical scenario would be that more days of planning and development will be needed, but less time will be required for late-night bug-fixing and calming of irate customers.

9.What's the role of documentation in QA?
Critical. QA practices should be documented such that they are repeatable. Specifications, designs, business rules, inspection reports, configurations, code changes, test plans, test cases, bug reports, user manuals, etc. should all be documented. There should ideally be a system for easily finding and obtaining documents and determining what documentation will have a particular piece of information. Change management for documentation should be used if possible. 

10.What is verification? validation?
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings. Validation typically involves actual testing and takes place after verifications are completed. The term 'IV & V' refers to Independent Verification and Validation.
Verification: are we building the product right?
Validation: are we building the right product?

11.What steps are needed to develop and run software tests?
The following are some of the steps to consider:
  1. Obtain requirements, functional design, and internal design specifications and other necessary documents
  2. Obtain budget and schedule requirements
  3. Determine project-related personnel and their responsibilities, reporting requirements, required standards and processes (such as release processes, change processes, etc.)
  4. Identify application's higher-risk aspects, set priorities, and determine scope and limitations of tests
  5. Determine test approaches and methods - unit, integration, functional, system, load, usability tests, etc.
  6. Determine test environment requirements (hardware, software, communications, etc.)
  7. Determine testware requirements (record/playback tools, coverage analyzers, test tracking, problem/bug tracking, etc.)
  8. Determine test input data requirements
  9. Identify tasks, those responsible for tasks, and labor requirements
  10. Set schedule estimates, timelines, milestones
  11. Determine input equivalence classes, boundary value analyses, error classes
  12. Prepare test plan document and have needed reviews/approvals
  13. Write test cases
  14. Have needed reviews/inspections/approvals of test cases
  15. Prepare test environment and testware, obtain needed user manuals/reference documents/configuration guides/installation guides, set up test tracking processes, set up logging and archiving  processes, set up or obtain test input data
  16. Obtain and install software releases
  17. Perform tests
  18. Evaluate and report results
  19. Track problems/bugs and fixes
  20. Retest as needed
  21. Maintain and update test plans, test cases, test environment, and testware through life cycle
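Step 11 (equivalence classes and boundary value analysis) can be sketched concretely. Suppose, purely as an assumption for this example, a field accepts ages from 18 to 65; classic boundary value analysis tests just below, at, and just above each boundary:

```python
# Boundary value analysis for a hypothetical input field accepting ages 18-65.
# The valid range is an assumption made up for this example.

def boundary_values(low, high):
    """Classic boundary values: just below, at, and just above each boundary."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def is_valid_age(age, low=18, high=65):
    """The (hypothetical) validation rule under test."""
    return low <= age <= high

for v in boundary_values(18, 65):
    print(v, is_valid_age(v))
```

The two invalid values (17 and 66) and four valid values (18, 19, 64, 65) cover the edges where off-by-one defects most often hide, while the interiors of each equivalence class need only one representative value each.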


12.How can it be known when to stop testing?
This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. Common factors in deciding when to stop are:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Test budget depleted
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a certain level
- Beta or alpha testing period ends
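Several of these stopping factors can be combined into an explicit exit-criteria check that a test lead reviews before sign-off. The thresholds and metric names below are illustrative assumptions, not standard values:

```python
# Illustrative exit-criteria check; thresholds and metric names are made up
# for this sketch and would be agreed per project in the test plan.

def ready_to_stop(metrics,
                  min_pass_rate=0.95,
                  min_requirement_coverage=0.90,
                  max_open_critical_bugs=0):
    """Return True when all stop-testing criteria are satisfied."""
    pass_rate = metrics["passed"] / metrics["executed"]
    return (pass_rate >= min_pass_rate
            and metrics["requirement_coverage"] >= min_requirement_coverage
            and metrics["open_critical_bugs"] <= max_open_critical_bugs)

print(ready_to_stop({"executed": 200, "passed": 192,
                     "requirement_coverage": 0.93, "open_critical_bugs": 0}))
print(ready_to_stop({"executed": 200, "passed": 170,
                     "requirement_coverage": 0.93, "open_critical_bugs": 2}))
```

Making the criteria executable removes ambiguity at release time: either the numbers meet the agreed bar or they don't, and any override becomes a visible, conscious management decision.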

13.How does a client/server environment affect testing?
Client/server applications can be quite complex due to the multiple dependencies among clients, data communications, hardware, and servers. Thus testing requirements can be extensive. When time is limited (as it usually is) the focus should be on integration and system testing. Additionally, load/stress/performance testing may be useful in determining client/server application limitations and capabilities. There are commercial tools to assist with such testing.

14.What if the application has functionality that wasn't in the requirements?
It may take serious effort to determine if an application has significant unexpected or hidden functionality, and it would indicate deeper problems in the software development process. If the functionality isn't necessary to the purpose of the application, it should be removed, as it may have unknown impacts or dependencies that were not taken into account by the designer or the customer. If not removed, design information will be needed to determine added testing needs or regression testing needs. Management should be made aware of any significant added risks as a result of the unexpected functionality. If the functionality only affects areas such as minor improvements in the user interface, for example, it may not be a significant risk.
The functionality should be discussed within the team (QA and development) to measure its impact on the application and to flag the associated risk. If there is not enough time left, it may be better to remove the function and declare it an enhancement for a future release.

15.How can World Wide Web sites be tested?
Web sites are essentially client/server applications - with web servers and 'browser' clients. Consideration should be given to the interactions between html pages, TCP/IP communications, Internet connections, firewalls, applications that run in web pages (such as applets, javascript, plug-in applications), and applications that run on the server side (such as cgi scripts, database interfaces, logging applications, dynamic page generators, asp, etc.). Additionally, there are a wide variety of servers and browsers, various versions of each, small but sometimes significant differences between them, variations in connection speeds, rapidly changing technologies, and multiple standards and protocols. The end result is that testing for web sites can become a major ongoing effort. Other considerations might include:
- What are the expected loads on the server (e.g., number of hits per unit time?), and what kind of performance is required under such loads (such as web server response time, database query response times). What kinds of tools will be needed for performance testing (such as web load testing tools, other tools already in house that can be adapted, web robot downloading tools, etc.)?
- Who is the target audience? What kind of browsers will they be using? What kind of connection speeds will they be using? Are they intra-organization (thus with likely high connection speeds and similar browsers) or Internet-wide (thus with a wide variety of connection speeds and browser types)?
- What kind of performance is expected on the client side (e.g., how fast should pages appear, how fast should animations, applets, etc. load and run)?
- Will down time for server and content maintenance/upgrades be allowed? How much?
- What kinds of security (firewalls, encryption, passwords, etc.) will be required, and what is it expected to do? How can it be tested?
- How reliable are the site's Internet connections required to be? And how does that affect backup system or redundant connection requirements and testing?
- What processes will be required to manage updates to the web site's content, and what are the requirements for maintaining, tracking, and controlling page content, graphics, links, etc.?
- Which HTML specification will be adhered to? How strictly? What variations will be allowed for targeted browsers?
- Will there be any standards or requirements for page appearance and/or graphics throughout a site or parts of a site?
- How will internal and external links be validated and updated? How often?
- Can testing be done on the production system, or will a separate test system be required? How are browser caching, variations in browser option settings, dial-up connection variabilities, and real-world internet 'traffic congestion' problems to be accounted for in testing?
- How extensive or customized are the server logging and reporting requirements; are they considered an integral part of the system and do they require testing?
- How are cgi programs, applets, javascripts, ActiveX components, etc. to be maintained, tracked, controlled, and tested?
Some sources of site security information include the Usenet newsgroup 'comp.security.announce' and links concerning web site security in the 'Other Resources' section.
Some usability guidelines to consider - these are subjective and may or may not apply to a given situation (Note: more information on usability testing issues can be found in articles about web site usability in the 'Other Resources' section):
- Pages should be 3-5 screens max unless content is tightly focused on a single topic. If larger, provide internal links within the page.
- The page layouts and design elements should be consistent throughout a site, so that it's clear to the user that they're still within the site.
- Pages should be as browser-independent as possible, or pages should be provided or generated based on the browser type.
- All pages should have links external to the page; there should be no dead-end pages.
- The page owner, revision date, and a link to a contact person or organization should be included on each page.
Also check the load behaviour of the website in different browsers, and use stress testing to see how well the load on the site can be controlled.
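The link-validation question above can be partly automated. The sketch below covers only the first half of the job, extracting internal and external links from a page with Python's standard-library HTML parser; actually fetching each URL and checking its HTTP status is omitted:

```python
# Sketch of the first half of link validation: extracting links from a page.
# Checking each URL (HTTP status, redirects, dead ends) is omitted here.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/home">Home</a> <a href="https://example.com">Ext</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)

internal = [l for l in extractor.links if not l.startswith("http")]
external = [l for l in extractor.links if l.startswith("http")]
print(internal, external)
```

Run against every page of a site on a schedule, a script like this gives the "how often" question a cheap answer: as often as the crawl runs.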
16.What is 'Software Testing'?
Testing involves operation of a system or application under controlled conditions and evaluating the results (eg, 'if the user is in interface A of the application while using hardware B, and does C, then D should happen'). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'. (See the Bookstore section's 'Software Testing' category for a list of useful books on Software Testing.)
Organizations vary considerably in how they assign responsibility for QA and testing. Sometimes they're the combined responsibility of one group or individual. Also common are project teams that include a mix of testers and developers who work closely together, with overall QA processes monitored by project managers. It will depend on what best fits an organization's size and business structure.
Software testing is the process used to help identify the correctness, completeness, and quality of developed computer software.

17.What makes a good test engineer?
A good test engineer has a 'test to break' attitude, an ability to take the point of view of the customer, a strong desire for quality, and an attention to detail. Tact and diplomacy are useful in maintaining a cooperative relationship with developers, and an ability to communicate with both technical (developers) and non-technical (customers, management) people is useful. Previous software development experience can be helpful as it provides a deeper understanding of the software development process, gives the tester an appreciation for the developers' point of view, and reduces the learning curve in automated test tool programming. Judgment skills are needed to assess high-risk areas of an application on which to focus testing efforts when time is limited.
A good test engineer should be able to find as many bugs as possible, and find them as early as possible.

18.What is 'configuration management'?
Configuration management covers the processes used to control, coordinate, and track: code, requirements, documentation, problems, change requests, designs, tools/compilers/libraries/patches, changes made to them, and who makes the changes.
The various combinations of components that make up a product are known as configurations. Configuration management is responsible for the management of all product-related specifications and ensures structured handling of all development process work results.

19.What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.
Quality software is software that meets the customer's requirements the first time and every time.

20.What is Extreme Programming and what's it got to do with testing?
Extreme Programming (XP) is a software development approach for small teams on risk-prone projects with unstable requirements. It was created by Kent Beck, who described the approach in his book 'Extreme Programming Explained'. Testing is a core aspect of Extreme Programming. Programmers are expected to write unit and functional test code first - before the application is developed. Test code is under source control along with the rest of the code. Customers are expected to be an integral part of the project team and to help develop scenarios for acceptance/black box testing. Acceptance tests are preferably automated, and are modified and rerun for each of the frequent development iterations. QA and test personnel are also required to be an integral part of the project team. Detailed requirements documentation is not used, and frequent re-scheduling, re-estimating, and re-prioritizing is expected.

21.What is a 'test case'?
A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.
Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires completely thinking through the operation of the application. For this reason, it's useful to prepare test cases early in the development cycle if possible.
A test case can also be defined as the set of inputs with which the software is tested, together with the expected results.
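Those same particulars (identifier, objective, input data, steps, expected results) map naturally onto an automated test. The example below is hypothetical: the function under test, `apply_discount`, and its rules are invented for this illustration, using Python's built-in unittest framework:

```python
# A written test case turned into an automated one. The function under test
# (apply_discount) and its rules are invented for this illustration.
import unittest

def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Test case TC-017 (hypothetical ID): verify discount calculation.
    Objective, input data, and expected results are all captured as code."""

    def test_typical_discount(self):
        # Input: price 100.0, discount 20%. Expected result: 80.0.
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_invalid_percent_rejected(self):
        # Input outside the valid range must raise an error, not miscompute.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Run with `python -m unittest <filename>`. Writing the case this way preserves the early-design benefit mentioned above: thinking through inputs and expected results still happens, but the result is executable and repeatable.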

22.How is testing affected by object-oriented designs?
Well-engineered object-oriented design can make it easier to trace from code to internal design to functional design to requirements. While there will be little effect on black box testing (where an understanding of the internal design of the application is unnecessary), white-box testing can be oriented to the application's objects. If the application was well-designed, this can simplify test design.

23.What if the software is so buggy it can't really be tested at all?
The best bet in this situation is for the testers to go through the process of reporting whatever bugs or blocking-type problems initially show up, with the focus being on critical bugs. Since this type of problem can severely affect schedules, and indicates deeper problems in the software development process (such as insufficient unit testing or insufficient integration testing, poor design, improper build or release procedures, etc.) managers should be notified, and provided with some documentation as evidence of the problem.
Before receiving the official code delivery from the development team, the testing team should provide some very high-priority tests to the development team, called smoke tests or minimum acceptance test cases, with the intention of breadth-wise rather than depth-wise coverage across all newly developed and to-be-delivered functional areas. The intention should be clear: the development team is not supposed to do the QA job, but to evaluate the code against test scenarios written by the QA/testing group. The development team should have the onus to deliver the code and publish test results to QA and project management. This can prevent a show-stopper on day one of QA testing and avoid unnecessary schedule impacts.

If development does not buy in to the above approach, QA should work using an agile methodology, where defects that are identified and impact test execution are published to the development team and stakeholders, with expected fix dates for show-stopper/critical defects, at the end of every day. Triage meetings can also be held with functional, development, and QA folks to arrive at conclusions on defect impacts/clarifications and expected fix dates.

24.Why do we perform stress testing, resolution testing, and cross-browser testing?
Stress Testing: - To check the performance of the application.
Def: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Resolution Testing: - Sometimes a developer builds a page only for a 1024 x 768 resolution, and the same page displays a horizontal scroll bar at 800 x 600. Nobody likes a horizontal scroll bar appearing on the screen. That is the reason to do resolution testing.

Cross-browser Testing: - This is sometimes called compatibility testing. When we develop pages to be IE-compatible, the same pages may not work properly in Firefox or Netscape, because many scripts are not supported by browsers other than IE. That is why we need to do cross-browser testing.
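A crude stress test can be sketched by firing many concurrent requests at an operation and measuring latency at and beyond the expected load. Here `handle_request` is a stand-in for a real server call (e.g., an HTTP request), purely for illustration:

```python
# Crude stress-test sketch: hammer an operation concurrently, report results.
# handle_request is a stand-in for a real server call (e.g. an HTTP request).
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(i):
    time.sleep(0.001)          # simulate ~1 ms of server-side work
    return 200                 # simulated HTTP status code

def stress(n_requests, concurrency):
    """Run n_requests through a pool of `concurrency` workers; collect stats."""
    latencies = []

    def timed(i):
        start = time.perf_counter()
        status = handle_request(i)
        latencies.append(time.perf_counter() - start)
        return status

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(timed, range(n_requests)))
    ok = statuses.count(200)
    worst_ms = max(latencies) * 1000
    return ok, worst_ms

ok, worst_ms = stress(n_requests=100, concurrency=20)
print(f"{ok}/100 succeeded, worst latency {worst_ms:.1f} ms")
```

In a real stress test the concurrency would be ramped past the specified limit to see where, and how gracefully, the system starts failing; dedicated load-testing tools do this ramping and reporting for you.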

25.What is TRM?
TRM means Test Responsibility Matrix.

TRM: It indicates the mapping between test factors and development stages.

Test factors include:
ease of use, reliability, portability, authorization, access control, audit trail, ease of operation, maintainability, and so on.
Development stages:
requirements gathering, analysis, design, coding, testing, and maintenance
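The factor-to-stage mapping can be sketched as a small nested dictionary; a minimal illustration, with the specific assignments below being hypothetical examples rather than a standard TRM:

```python
# A minimal sketch of a Test Responsibility Matrix (TRM) as a nested dict.
# Factor and stage names come from the lists above; the True/False
# assignments are illustrative only.
factors = ["Ease of use", "Reliability", "Portability", "Authorization"]
stages = ["Requirements", "Analysis", "Design", "Coding", "Testing", "Maintenance"]

# Mark which stages address each factor (True = responsibility assigned).
trm = {factor: {stage: False for stage in stages} for factor in factors}
trm["Reliability"]["Design"] = True
trm["Reliability"]["Testing"] = True
trm["Authorization"]["Requirements"] = True

def stages_for(factor):
    """Return the development stages responsible for a given test factor."""
    return [s for s, assigned in trm[factor].items() if assigned]

print(stages_for("Reliability"))  # → ['Design', 'Testing']
```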
26.What is a Test Server?

The place where developers put their development modules, which testers access to test the functionality.
27.At what phase tester role starts?
In the SDLC, after completion of the FRS document, the test lead prepares the use case document and the test plan document; that is when the tester's role starts.
In other words, the role of the tester begins in the test design phase.

28.In what basis you will write test cases?
I would write the test cases based on the functional specifications and BRDs, plus some additional test cases using domain knowledge.
In short, write the test cases only after clearly understanding the user requirements.
29.What is an inspection?
An inspection is more formalized than a 'walkthrough', typically with 3-8 people including a moderator, reader, and a recorder to take notes. The subject of the inspection is typically a document such as a requirements spec or a test plan, and the purpose is to find problems and see what's missing, not to fix anything. Attendees should prepare for this type of meeting by reading through the document. Most problems will be found during this preparation. The result of the inspection meeting should be a written report. Thorough preparation for inspections is difficult, painstaking work, but is one of the most cost-effective methods of ensuring quality. Employees who are most skilled at inspections are like the 'eldest brother' in the parable: their skill may have low visibility, but they are extremely valuable to any software development organization, since bug prevention is far more cost-effective than bug detection.
Inspection is a technique used for verification: a thorough, word-by-word check of the software against a checklist, with the intention of finding defects.

30.Explain agile testing?
Agile testing is used whenever customer requirements are changing dynamically

If we have no SRS or BRS but we do have test cases, would you execute the test cases blindly, or do you follow some other process?

The test cases should contain detailed steps describing what the application is supposed to do:
1) The functionality of the application.
2) In addition, you can refer to the back end, i.e. look into the database, to gain more knowledge of the application.
31.What are the main key components in Web applications and client and Server applications? (differences)?
For Web Applications: A web application can be implemented using many kinds of technology, such as Java, .NET, VB, ASP, CGI, and Perl. Based on the technology, we can derive the components.
Take a Java web application: it can be implemented in a 3-tier architecture: presentation tier (JSP, HTML, DHTML, servlets, Struts), business tier (JavaBeans, EJB, JMS), and data tier (databases such as Oracle, SQL Server, etc.).
For a .NET application: presentation tier (ASP, HTML, DHTML), business tier (DLLs), and data tier (databases such as Oracle, SQL Server, etc.).
Client Server Applications: These have only two tiers: a presentation tier (e.g. Java Swing) and a data tier (Oracle, SQL Server). In a client-server architecture the entire application has to be installed on the client machine, so whenever you change the code, it has to be reinstalled on every client machine. In a web application, the core application resides on the server and the client can be a thin client (a browser); whatever changes you make, you install only on the server and need not worry about the clients, because nothing is installed on the client machine.

32.What is a Use case?
A use case describes a simple flow between the end user and the system. It contains preconditions, postconditions, normal flows, and exceptions. It is prepared by the team lead/test lead/tester.
Use cases describe the way a user interacts with the software application, and are often accompanied by a pictorial representation.


33.What does black-box testing mean at the unit, integration, and system levels?
Tests for each software requirement using
Equivalence Class Partitioning, Boundary Value Testing, and more
Test cases for system software requirements using the Trace Matrix, Cross-functional Testing, Decision Tables, and more
Test cases for system integration for configurations, manual operations, etc.
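The Boundary Value Testing named above can be sketched in a few lines; the 1..100 range below is an assumed example, not from the original text:

```python
# A minimal sketch of Boundary Value Testing for a field that accepts 1..100.
# The range is illustrative; real boundaries come from the requirement spec.
def boundary_values(lo, hi):
    """The classic boundary inputs: just below, at, and just above each bound."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts(value, lo=1, hi=100):
    """The behaviour under test: the field accepts values within [lo, hi]."""
    return lo <= value <= hi

for v in boundary_values(1, 100):
    print(v, accepts(v))
# 0 and 101 should be rejected; 1, 2, 99, and 100 accepted.
```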

34.For Web Applications what type of tests are you going to do?
Web-based applications present new challenges, including:
- short release cycles;
- constantly changing technology;
- a possibly huge number of users during the initial website launch;
- inability to control the user's running environment;
- 24-hour availability of the web site.
The quality of a website must be evident from the onset. Any difficulty, whether in response time, accuracy of information, or ease of use, will compel the user to click over to a competitor's site. Such problems translate into lost users, lost sales, and a poor company image.

To overcome these types of problems, use the following techniques:
1. Functionality Testing
Functionality testing involves making sure the features that most affect user interactions work properly. These include:
- forms
- searches
- pop-up windows
- shopping carts
- online payments

2. Usability Testing
Many users have a low tolerance for anything that is difficult to use or that does not work. A user's first impression of the site is important, yet many websites have become cluttered with an increasing number of features. For general-use websites, frustrated users can easily click over to a competitor's site.

Usability testing involves the following main steps:
- identify the website's purpose;
- identify the intended users;
- define tests and conduct the usability testing;
- analyze the acquired information.

3. Navigation Testing
Good navigation is an essential part of a website, especially one that is complex and provides a lot of information. Assessing navigation is a major part of usability testing.

4. Forms Testing
Websites that use forms need tests to ensure that each field works properly and that each form posts all data as intended by the designer.

5. Page Content Testing
Each web page must be tested for correct content from the user's perspective. These tests fall into two categories: ensuring that each component functions correctly and ensuring that the content of each component is correct.

6. Configuration and Compatibility testing
A key challenge for web applications is ensuring that the user sees a web page as the designer intended. The user can select different browser software and browser options, use different network software and online services, and run other concurrent applications. We execute the application under every supported browser/platform combination to ensure the website works properly in various environments.

7. Reliability and Availability Testing
A key requirement of a website is that it be available whenever the user requests it, 24 hours a day, every day. The number of users accessing the website simultaneously may also affect its availability.

8. Performance Testing
Performance testing, which evaluates system performance under normal and heavy usage, is crucial to the success of any web application. A system that takes too long to respond may frustrate the user, who can then quickly move to a competitor's site. Given enough time, every page request will eventually be delivered, so performance testing seeks to ensure that the website server responds to browser requests within defined parameters.

9. Load Testing
The purpose of load testing is to model real-world experience, typically by generating many simultaneous users accessing the website. We use automation tools to increase the ability to conduct a valid load test, because they emulate thousands of users by sending simultaneous requests to the application or server.
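The idea of many simultaneous users can be sketched with a thread pool; this is a toy illustration, not a real load-test tool: a real tool would send HTTP requests, while `handle_request` here is a hypothetical stand-in that just simulates server work.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# A toy load-test sketch: many simultaneous "users" hitting one request
# function. handle_request is a stand-in for a real HTTP request.
def handle_request(user_id):
    time.sleep(0.01)                  # simulate server work
    return 200                        # simulated HTTP status code

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=50) as pool:
    # 200 requests issued by 50 concurrent workers:
    statuses = list(pool.map(handle_request, range(200)))
elapsed = time.perf_counter() - start

print(f"{len(statuses)} requests, all OK: {all(s == 200 for s in statuses)}, "
      f"{elapsed:.2f}s")
```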

10. Stress Testing
Stress testing consists of subjecting the system to varying and maximum loads to evaluate the resulting performance. We use automated test tools to simulate loads on the website and execute the tests continuously for several hours or days.

11. Security Testing
Security is a primary concern when communicating and conducting business, especially sensitive and business-critical transactions, over the internet. The user wants assurance that personal and financial information is secure. Finding the vulnerabilities in an application that would grant an unauthorized user access to the system is important.

35.Smoke test? Do you use any automation tool for smoke testing?
Smoke testing checks whether the application performs its basic functionality properly, so that the test team can go ahead with detailed testing. Automation tools can definitely be used for it.
Smoke testing works like this: after completing a build, the development team releases it to the testing department, and one of the test engineers checks whether the build is stable enough for detailed testing. This is called smoke testing, build acceptance testing, or build verification testing.
Some companies treat it differently: before releasing the build to the testing department, the development team itself tests whether it is fit for detailed testing (smoke testing), and whatever testing the testing department conducts after receiving it is called sanity testing.
The distinction depends on the company; in an interview you can simply explain it this way to the recruiter.


36.Define Brainstorming and Cause-Effect Graphing
BS:
A learning technique involving open group discussion intended to expand the range of available ideas
OR
A meeting to generate creative ideas. At PEPSI Advertising, daily, weekly and bi-monthly brainstorming sessions are held by various work groups within the firm. Our monthly I-Power brainstorming meeting is attended by the entire agency staff.
OR
Brainstorming is a highly structured process to help generate ideas. It is based on the principle that you cannot generate and evaluate ideas at the same time. To use brainstorming, you must first gain agreement from the group to try brainstorming for a fixed interval (eg six minutes).

CEG (Cause-Effect Graphing):
A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.
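The cause-to-effect mapping can be sketched as a decision table; the login causes and effect below are hypothetical examples chosen for illustration, not part of the definition above:

```python
from itertools import product

# A minimal decision-table sketch in the spirit of cause-effect graphing.
# Hypothetical rule: login succeeds only when the user exists AND the
# password matches AND the account is not locked.
def login_allowed(user_exists, password_ok, locked):
    return user_exists and password_ok and not locked

# Enumerate every combination of causes and record the resulting effect:
table = [(causes, login_allowed(*causes))
         for causes in product([True, False], repeat=3)]
allowed = [causes for causes, effect in table if effect]
print(allowed)  # only (True, True, False) yields the 'login allowed' effect
```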

37.A password field is 6-character alphanumeric; what are the possible input conditions?
Including special characters, the possible input conditions are:
1) Input password as = 6abcde (ie number first)
2) Input password as = abcde8 (ie character first)
3) Input password as = 123456 (all numbers)
4) Input password as = abcdef (all characters)
5) Input password less than 6 digit
6) Input password greater than 6 digits
7) Input password as special characters
8) Input password in CAPITAL ie uppercase
9) Input password including space
10) (SPACE) followed by alphabets /numerical /alphanumerical/
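A quick sketch exercising these conditions against a validator; the exact rule (letters and digits only, length exactly 6) is an assumption about the requirement, not stated in the question:

```python
import re

# Assumed rule: exactly 6 characters, each a letter or digit.
def valid_password(pw):
    return bool(re.fullmatch(r"[A-Za-z0-9]{6}", pw))

cases = ["6abcde", "abcde8", "123456", "abcdef", "ABCDEF",  # valid per the rule
         "abc12", "abcdefg", "abc@12", "abc 12", " abc12"]  # invalid per the rule
for pw in cases:
    print(repr(pw), valid_password(pw))
```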

38.What is the format of a test case?

The main points included in a test case are:
1. Test case ID 2. Objective 3. Prerequisites 4. Steps 5. Test data 6. Expected result.
These six items are important when writing a test case. A test case is written to find bugs or errors in the software, i.e. a difference between the actual result and the expected result.

39.Explain Software metrics?
Measurement is fundamental to any engineering discipline
Why Metrics?
- We cannot control what we cannot measure!
- Metrics help to measure quality
- They serve as a dashboard
The main metrics are size, schedule, and defects; each has sub-metrics.
Test Coverage = Number of units (KLOC/FP) tested / total size of the system
Test cost (in %) = Cost of testing / total cost *100
Cost to locate defect = Cost of testing / the number of defects located
Defects detected in testing (in %) = Defects detected in testing / total system defects*100
Acceptance criteria tested = Acceptance criteria tested / total acceptance criteria
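The formulas above can be computed directly; a minimal sketch with made-up numbers (all figures below are illustrative, not from a real project):

```python
# Compute the testing metrics defined above, using illustrative values.
def percent(part, whole):
    return part / whole * 100

units_tested, total_size = 45, 60            # e.g. KLOC or function points
cost_of_testing, total_cost = 20_000, 100_000
defects_in_testing, total_defects = 80, 100
criteria_tested, total_criteria = 18, 20

test_coverage = units_tested / total_size
test_cost_pct = percent(cost_of_testing, total_cost)
cost_per_defect = cost_of_testing / defects_in_testing
defect_detection_pct = percent(defects_in_testing, total_defects)
acceptance_pct = percent(criteria_tested, total_criteria)

print(f"Coverage {test_coverage:.2f}, test cost {test_cost_pct:.0f}%, "
      f"cost/defect {cost_per_defect:.0f}, defects found {defect_detection_pct:.0f}%, "
      f"acceptance {acceptance_pct:.0f}%")
```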

40.What is Testing environment in your company, means how testing process start?
Testing process is going as follows:
Quality assurance unit
Quality assurance manager
Test lead
Test engineer

41.What is internationalization Testing?
Software internationalization is the process of developing software products independent of the cultural norms, language, or other specific attributes of a market.

42.What the main use of preparing a traceability matrix?
A traceability matrix is prepared in order to cross-check the test cases designed against each requirement, giving an opportunity to verify that all the requirements are covered in testing the application.
(Or)
To cross-verify the prepared test cases and test scripts against the user requirements, and to monitor the changes and enhancements that occur during the development of the project.
It is a document containing a table of linking information, used for tracing back references whenever confusion or questions arise.
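A minimal sketch of such a matrix as a requirement-to-test-case mapping, with the coverage check the answer describes; all IDs below are illustrative:

```python
# A minimal traceability-matrix sketch: map each requirement ID to the test
# cases that cover it, then flag uncovered requirements. IDs are made up.
requirements = ["REQ-01", "REQ-02", "REQ-03"]
matrix = {
    "REQ-01": ["TC-001", "TC-002"],
    "REQ-02": ["TC-003"],
    "REQ-03": [],                       # no coverage yet
}

uncovered = [r for r in requirements if not matrix.get(r)]
print("Uncovered requirements:", uncovered)  # → ['REQ-03']
```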

43.How you are breaking down the project among team members?
It can depend on the following factors:
1) Number of modules
2) Number of team members
3) Complexity of the project
4) Time duration of the project
5) Team members' experience, etc.

44.What is Bug life cycle?
New: when the tester reports a defect.
Open: when the developer accepts that it is a bug; if the developer rejects the defect, the status becomes "Rejected".
Fixed: when the developer makes changes to the code to rectify the bug.
Closed/Reopen: when the tester tests it again. If the expected result shows up, the status becomes "Closed"; if the problem persists, it becomes "Reopen".
For a Valid Defect -
1. New
2. Open
3. Ready for Retest/ Fixed
4. If the issue is fixed, then change the status to close else reopen the defect and inform the dev team regarding the same to rework on the defect.
5. Final Status is Closed

For an invalid defect:
1. New.
2. Open.
3. Rejected.
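The life cycle above can be sketched as a simple state machine; the transitions follow the valid/invalid paths described in the answer, plus the deferred status covered elsewhere in these FAQs:

```python
# A sketch of the defect life cycle as a state machine. Transitions follow
# the paths described in the answer (plus "Deferred", see question 58).
TRANSITIONS = {
    "New": {"Open", "Rejected"},
    "Open": {"Fixed", "Rejected", "Deferred"},
    "Fixed": {"Closed", "Reopen"},      # decided after retest
    "Reopen": {"Fixed"},
    "Rejected": set(), "Deferred": set(), "Closed": set(),
}

def move(status, new_status):
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = "New"
for step in ["Open", "Fixed", "Reopen", "Fixed", "Closed"]:
    s = move(s, step)
print(s)  # → Closed
```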

45.What is the minimum criteria for white box?
We should know the logic, code, and structure of the program or function: internal knowledge of how the system works, the logic behind it, and how it should react to a particular action.

46.What are the technical reviews?
Every document should be reviewed. Technical review in the sense that, for each screen, the developer writes a technical specification, which should be reviewed by a developer and a tester. There are functional specification reviews, unit test case reviews, code reviews, etc.

47.When a bug is found, what is the first action?
Report it in bug tracking tool.

48.What are the differences between these three words Error, Defect and Bug?
Error: The deviation from the required logic, syntax, or standards is called an error.
There are three types of error:
Syntax error (deviation from the syntax of the language being used).
Logical error (deviation from the logic the program is supposed to follow).
Execution error (generally occurs while the program is being executed).
Defect: When an error is found by a test engineer (the testing department), it is called a defect.

Bug: If the defect is agreed to by the developer, it becomes a bug, which has to be fixed by the developer or postponed to the next version.
In short:
Error: a problem identified by a programmer is called an error.

Defect: a problem identified by a test engineer is called a defect.

Bug: a defect accepted by a developer is called a bug.

Failure: a problem identified by a user is called a failure.

49.What is Software reliability?
It is the probability that software will work without failure for a specified period of time in a specified environment. Reliability of software is measured in terms of Mean Time Between Failures (MTBF). For example, if MTBF = 10,000 hours for an average piece of software, then it should not fail for 10,000 hours of continuous operation.
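One common way to turn MTBF into a failure-free-operation probability is the exponential reliability model R(t) = exp(-t / MTBF); the model choice is an assumption here, since the answer above only defines MTBF itself:

```python
import math

# Exponential reliability model (an assumption, not stated in the answer):
# R(t) = exp(-t / MTBF) gives the probability of failure-free operation.
def reliability(t_hours, mtbf_hours):
    return math.exp(-t_hours / mtbf_hours)

print(round(reliability(10_000, 10_000), 3))  # at t = MTBF, R ≈ 0.368
print(round(reliability(100, 10_000), 3))     # for a short run, R ≈ 0.99
```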

50.What is the difference between Product-based Company and Projects-based Company?
A product-based company develops applications for global clients, i.e. there is no specific client; requirements are gathered from the market and analyzed with experts.
A project-based company develops applications for a specific client; the requirements are gathered from the client and analyzed with the client. If software is developed for a specific customer's requirements, it is called a project/application.
If software is developed for the overall requirements of a market, it is called a product.


51.Give an example of high priority and low severity, low priority and high severity?
Severity level:
The degree of impact the issue or problem has on the project. Severity 1 usually means the highest level requiring immediate attention. Severity 5 usually represents a documentation defect of minimal impact.
Severity is levels:
-Critical: the software will not run
-High: unexpected fatal errors (includes crashes and data corruption)
-Medium: a feature is malfunctioning
-Low: a cosmetic issue

Severity levels
1. Bug causes system crash or data loss.
2. Bug causes major functionality or other severe problems; product crashes in obscure cases.
3. Bug causes minor functionality problems, may affect "fit and finish".
4. Bug contains typos, unclear wording or error messages in low visibility fields.
Severity levels

-High: A major issue where a large piece of functionality or major system component is completely broken. There is no workaround and testing cannot continue.
-Medium: A major issue where a large piece of functionality or major system component is not working properly. There is a workaround, however, and testing can continue.
-Low: A minor issue that imposes some loss of functionality, but for which there is an acceptable and easily reproducible workaround. Testing can proceed without interruption.

Severity and Priority

Priority is Relative: the priority might change over time. Perhaps a bug initially deemed P1 becomes rated as P2 or even a P3 as the schedule draws closer to the release and as the test team finds even more heinous errors. Priority is a subjective evaluation of how important an issue is, given other tasks in the queue and the current schedule. It's relative. It shifts over time. And it's a business decision.

Severity is an absolute: it's an assessment of the impact of the bug without regard to other work in the queue or the current schedule. The only reason severity should change is if we have new information that causes us to re-evaluate our assessment. If it was a high severity issue when I entered it, it's still a high severity issue when it's deferred to the next release. The severity hasn't changed just because we've run out of time. The priority changed.

Severity Levels can be defined as follow:

S1 - Urgent/Showstopper. E.g. a system crash, or an error message forcing the window to close.
The tester's ability to operate the system is totally (system down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.

S2 - Medium/Workaround. Exists when, for example, a problem is found against the specs but the tester can go on with testing. The incident affects an area of functionality but there is a workaround which negates the impact on the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customers, or is intermittent.

S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. These problems do not impact use of the product in any substantive way; they are cosmetic in nature and have no or very low impact on business processes.


52.What are non-functional requirements?
The non-functional requirements of a software product are: reliability, usability, efficiency, delivery time, software development environment, security requirements, standards to be followed etc.
53.What is the maximum length of the test case we can write?


We can't specify an exact test case length; it depends on the functionality.
54.What is test plan and explain its contents?
Test plan is a document which contains the scope for testing the application: what is to be tested, when it is to be tested, and who will test it.
test plan contents are:
1) Introduction
2) coverage of testing
3) test strategy
4) Base criteria
5) Test deliverable
6) Test environment
7) scheduling
8) Staffing and Training
9) Risks and solution plan
10) Assumptions
11) Approval information

55.What is stub? Explain in testing point of view?
A stub is a dummy program or component used in place of code that is not yet ready for testing. For example, if a project has four modules and the last one is unfinished and there is no time, we use a dummy program to stand in for that fourth module so that all four modules can be run together. That dummy program is known as a stub.
In short: when we are testing modules and a required module is missing, it is replaced by a temporary program called a stub.
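A minimal sketch of a stub standing in for an unfinished module; the payment-gateway scenario and all names below are hypothetical:

```python
# Module under test calls a payment gateway that is not yet implemented,
# so a stub with a canned response takes its place. Names are hypothetical.
def payment_gateway_stub(amount):
    """Stub for the unfinished payment module: always approves."""
    return {"status": "approved", "amount": amount, "txn_id": "STUB-0001"}

def checkout(cart_total, gateway):
    """The module under test calls whatever gateway it is given."""
    result = gateway(cart_total)
    return result["status"] == "approved"

print(checkout(99.50, payment_gateway_stub))  # → True
```

Because `checkout` receives the gateway as a parameter, the stub can later be swapped for the real module without changing the code under test.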

56.Diff. between STLC and SDLC?

STLC, the software test life cycle, starts with:

-Preparing the test strategy.
-Preparing the test plan.
-Creating the test environment.
-Writing the test cases.
-Creating test scripts.
-Executing the test scripts.
-Analyzing the results and reporting the bugs.
-Doing regression testing.
-Test exiting.

SDLC is software or system development life cycle, phases are...

-Project initiation.
-Requirement gathering and documenting.
-Designing.
-Coding and unit testing.
-Integration testing.
-System testing.
-Installation and acceptance testing. " Support or maintenance.
STLC means software testing life cycle; it contains phases like:
test planning
test development
test execution
result analysis
bug tracking
reporting

SDLC means software development life cycle; it contains:
requirements gathering
analysis
design
coding
testing
delivery and maintenance


57.Advantages of automation over manual testing?
Automation saves time, resources, and money. When we test an application manually, reliability is not 100%: if testing must be run 1000 times, we may manage only 500 to 600 runs manually, whereas automation gives you exact output every time. Through automation we do faster, repeatable, comprehensive testing.


58.What is deferred status in defect life cycle?
Deferred status means the developer accepted the bug, but it is scheduled to be rectified in the next build. Whenever the developer accepts the defect but wants to rectify it later, the status can be set to deferred.


59.What are the main bugs which were identified by you and in that how many are considered as real bugs?
Take one screen with, say, 50 test conditions, out of which I have identified 5 defects (failed conditions). I should give the defect description, severity, and defect classification. All the defects will be considered.

Defect classifications are:
GRP: Graphical representation error
LOG: Logical error
DSN: Design error
STD: Standard error
TST: Wrong test case
TYP: Typographical error (cosmetic error)


60.There are two sand clocks (timers): one empties completely in 7 minutes and the other in 9 minutes. Using only these timers, ring a bell after exactly 11 minutes. Please give the solution.
1. Start both timers together.
2. When the 7-minute timer empties (t = 7), flip it so that it restarts.
3. When the 9-minute timer empties (t = 9), flip the 7-minute timer; only 2 minutes of sand have drained since its last flip, so flipping it leaves exactly 2 minutes.
4. When the 7-minute timer empties again (t = 11), 11 minutes are complete.
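The steps above can be verified with a few lines of arithmetic:

```python
# Arithmetic check of the two-timer procedure.
# t = 0: start both timers.
first_flip = 7                      # the 7-minute timer empties and is flipped
nine_empties = 9                    # the 9-minute timer empties
drained_since_flip = nine_empties - first_flip   # 2 minutes of sand below
# Flipping the 7-minute timer at t = 9 puts those 2 minutes back on top,
# so it empties again 2 minutes later:
bell = nine_empties + drained_since_flip
print(bell)  # → 11
```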


61.What is the formal technical review?
A technical review should be done by a team of members: the author of the document under review and the reviewers sit together and review it. This is called a peer review; if the document is technical, it can be called a formal technical review. The exact practice varies with company policy.
62.Verification and validation?
Verification is static. No code is executed. Say, analysis of requirements etc.
Validation is dynamic. Code is executed with scenarios present in test cases.
Verification is a process of checking, conducted on each role in the organization, to confirm whether people are doing their work according to the company's process guidelines.

Validation is a process of checking, conducted on the developed product (or its related parts), to confirm whether it works according to expectations.

63.Actually how many positive and negetive testcases will write for a module?
That depends on the module and the complexity of its logic. For every test case we can identify positive and negative points, and based on those criteria we write the test cases. If it is a crucial process or screen, we should check it under all the boundary conditions.

64.What is Six sigma?
Six Sigma:
A quality discipline that focuses on product and service excellence to create a culture that demands perfection on target, every time.
Six Sigma quality levels
Produces 99.9997% accuracy, with only 3.4 defects per million opportunities.
Six Sigma is designed to dramatically upgrade a company's performance, improving quality and productivity. Starting from existing products, processes, and service standards,
companies follow the Six Sigma MAIC methodology to upgrade performance.
MAIC is defined as follows:
Measure: Gather the right data to accurately assess a problem.
Analyze: Use statistical tools to correctly identify the root causes of a problem
Improve: Correct the problem (not the symptom).
Control: Put a plan in place to make sure problems stay fixed and sustain the gains.

Key Roles and Responsibilities:

The key roles in all Six Sigma efforts are as follows:
Sponsor: Business executive leading the organization.
Champion: Responsible for Six Sigma strategy, deployment, and vision.
Process Owner: Owner of the process, product, or service being improved responsible for long-term sustainable gains.
Master Black Belts: Coach black belts expert in all statistical tools.
Black Belts: Work on 3 to 5 $250,000-per-year projects; create $1 million per year in value.
Green Belts: Work with black belt on projects.

65.What are cookies? Tell me the advantage and disadvantage of cookies?

Cookies are messages that web servers pass to your web browser when you visit Internet sites. Your browser stores each message in a small file. When you request another page from the server, your browser sends the cookie back to the server. These files typically contain information about your visit to the web page, as well as any information you've volunteered, such as your name and interests. Cookies are most commonly used to track web site activity. When you visit some sites, the server gives you a cookie that acts as your identification card. Upon each return visit to that site, your browser passes that cookie back to the server. In this way, a web server can gather information about which web pages are used the most, and which pages are gathering the most repeat hits. Only the web site that creates the cookie can read it. Additionally, web servers can only use information that you provide or choices that you make while visiting the web site as content in cookies. Accepting a cookie does not give a server access to your computer or any of your personal information. Servers can only read cookies that they have set, so other servers do not have access to your information. Also, it is not possible to execute code from a cookie, and not possible to use a cookie to deliver a virus.

66.What is mean by release notes?
It's a document released along with the product which explains the product. It also lists the bugs that are in deferred status.

67.What is Test Data Collection?
Test data is the collection of input data taken for testing the application. Various types and size of input data will be taken for testing the applications. Sometimes in critical application the test data collection will be given by the client also.

68.What is software 'quality'?
Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

69. How you will know when to stop testing?
Testing can be stopped when we know that only minor bugs remain that do not affect the functionality of the application, and when all the test cases have been executed successfully.

70. What are the metrics generally you use in testing?
These software metrics are taken care of by the SQA team.
Ex: defect removal efficiency.

71. What is ECP and how you will prepare test cases?
It is a software testing technique used for writing test cases; it breaks the input range into equivalence partitions. The main purposes of this technique are:
1) To reduce the number of test cases to a necessary minimum.
2) To select the right test cases to cover all the scenarios.
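A minimal sketch of equivalence class partitioning; the age field accepting 18..60 is an assumed example, and one representative value stands in for each whole class:

```python
# Equivalence-class-partitioning sketch for an age field accepting 18..60.
# One representative per class replaces exhaustive input testing, because
# every value in a class is assumed to behave the same way.
partitions = {
    "below range (invalid)": 10,   # any value < 18
    "in range (valid)":      35,   # any value in 18..60
    "above range (invalid)": 75,   # any value > 60
}

def accepts_age(age):
    return 18 <= age <= 60

for name, representative in partitions.items():
    print(name, "->", accepts_age(representative))
```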

72. Test Plan contents? Who will use this doc?
Test plan is a document which contains the scope and risk analysis.
For every success there should be a plan; likewise, to get a quality product, a proper test plan should be in place. It is used by the test lead and test engineers, and reviewed by project management.

The test plan contents are:
1) Introduction
a) Overview
b) Acronyms
2) Risk analysis
3) Test items
4) Features and functions to be tested
5) Features and functions not to be tested
6) Test strategy
7) Test environment
8) System test schedule
9) Test deliverables
10) Resources
11) Resumption and suspension criteria
12) Staffing and training

73. What are Test case preparation guidelines?
Test cases are prepared based on the requirement specifications and user interface documents (screen shots of the application).

74. How u will do usability testing explain with example?
Mainly to check the look and feel, ease of use, GUI (colours, fonts, alignment), help manuals, and the complete end-to-end navigation.

75. What is Functionality testing?
In this testing we mainly check the functionality of the application: whether it meets the customer requirements or not.
Ex: 1 + 1 = 2.

76. Which SDLC you are using?
V model

77. Explain V & V model?
Verification and Validation Model.

78. What are the acceptance criteria for your project?
The customer specifies the acceptance criteria, e.g. "such-and-such functionality works well enough for me."

79. Who will provide the LOC to u?
LOC (lines of code); it depends on the standards the company follows.

80. How u will report the bugs?
By using a bug tracking tool like Bugzilla or Test Director; again it depends on the company, and some companies use their own tools.

81. Explain your organizations testing process?
1) SRS
2) Planning
3) Test scenario design
4) Test case design
5) Execution
6) Bug reporting
7) Maintenance

82.What’s the Alpha Testing ?
Alpha testing is conducted at the developer's site, in a controlled environment, by the end user of the software.

83.What’s the Beta Testing ?
Testing the application after installation at the client's place.

84.What is Component Testing ?
Testing of individual software components (Unit Testing).

85.What’s Compatibility Testing ?
Compatibility testing verifies that the software is compatible with the other elements of the system (e.g., operating systems, browsers, hardware, and other software).

86.What is Concurrency Testing ?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.
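A minimal multi-threaded sketch of the idea (a shared counter stands in for a shared database record; the lock is the locking behaviour being exercised):

```python
import threading

counter = 0
lock = threading.Lock()

def worker():
    """Simulated user repeatedly updating the same shared record."""
    global counter
    for _ in range(10_000):
        with lock:            # locking under test: prevents lost updates
            counter += 1

# Five concurrent "users" access the same data.
threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With correct locking, no increments are lost.
assert counter == 5 * 10_000
print("no lost updates:", counter)
```

Removing the lock would (intermittently) make the assertion fail, which is exactly the kind of defect concurrency testing is meant to expose.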

87.What is Conformance Testing ?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

88.What is Context Driven Testing ?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

89.What is Data Driven Testing ?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
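A minimal sketch of the technique (the rows would normally live in an external CSV file or spreadsheet; here they are inlined for brevity, and `add` is an assumed function under test):

```python
import csv
import io

# Externally maintained test data (normally a .csv file on disk).
TEST_DATA = """a,b,expected
1,1,2
2,3,5
10,-4,6
"""

def add(a, b):
    """Assumed function under test."""
    return a + b

# The same test action is parameterized by each data row.
failures = []
for row in csv.DictReader(io.StringIO(TEST_DATA)):
    result = add(int(row["a"]), int(row["b"]))
    if result != int(row["expected"]):
        failures.append(row)

assert not failures, f"data-driven test failed for rows: {failures}"
print("all data rows passed")
```

New cases can then be added by editing the data file, without touching the test code.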

90.What is Conversion Testing ?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

91.What is Dependency Testing ?
Examines an application’s requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

92.What is Depth Testing ?
A test that exercises a feature of a product in full detail.

93.What is Dynamic Testing ?
Testing software through executing it. See also Static Testing.

94.What is Endurance Testing ?
Checks for memory leaks or other problems that may occur with prolonged execution.

95.What is End-to-End testing ?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

96.What is Exhaustive Testing ?
Testing which covers all combinations of input values and preconditions for an element of the software under test.

97.What is Gorilla Testing ?
Testing one particular module or piece of functionality heavily.

98.What is Installation Testing ?
Confirms that the application under test installs, upgrades, and uninstalls correctly on the supported platforms, including full, partial, and upgrade installation paths.

99.What is Localization Testing ?
Localization testing checks software that has been adapted for a specific locality: translated text, currency, date and number formats, and other locale conventions.

100.What is Loop Testing ?
A white box testing technique that exercises program loops.

101.What is Mutation Testing ?
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources.
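A toy illustration of the idea, with a hand-made mutant rather than a real mutation tool (`original_max` and its test data are assumptions for this sketch):

```python
def original_max(a, b):
    return a if a >= b else b

def mutant_max(a, b):
    # Deliberately injected bug (the 'mutant'): comparison flipped.
    return a if a <= b else b

def suite_passes(max_fn):
    """Run the existing test cases against a given implementation."""
    return max_fn(2, 1) == 2 and max_fn(1, 5) == 5

assert suite_passes(original_max)       # the original passes
assert not suite_passes(mutant_max)     # the mutant is detected ('killed')
print("the test data kills the mutant, so it is useful")
```

A mutant that survived (passed the suite) would show the test data is too weak.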

102.What is Monkey Testing ?
Testing a system or application on the fly, i.e., just a few tests here and there to ensure the system or application does not crash.

103.What is Positive Testing ?
Testing aimed at showing software works. Also known as “test to pass”. See also Negative Testing.

104.What is Negative Testing ?
Testing aimed at showing software does not work. Also known as “test to fail”. See also Positive Testing.
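Both styles in one minimal sketch (`parse_quantity` is a hypothetical function used only to illustrate the two kinds of test):

```python
def parse_quantity(text):
    """Hypothetical: parse a positive integer quantity or raise ValueError."""
    value = int(text)            # raises ValueError for non-numeric input
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Positive test ("test to pass"): valid input, expect success.
assert parse_quantity("3") == 3

# Negative test ("test to fail"): invalid input, expect a clean failure.
try:
    parse_quantity("-1")
except ValueError:
    negative_test_passed = True
else:
    negative_test_passed = False
assert negative_test_passed
print("positive and negative tests passed")
```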

105.What is Path Testing ?
Testing in which all paths in the program source code are tested at least once.

106.What is Performance Testing ?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as “Load Testing”.

107.What is Ramp Testing ?
Continuously raising an input signal until the system breaks down.

108.What is Recovery Testing ?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions.

109.What is Re-testing ?
Re-testing: testing the functionality of the application again, typically after a defect fix, to confirm that the fix works.

110.What is the Regression testing ?
Regression: checking that changes in the code have not affected the existing working functionality.
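A minimal sketch of the distinction between re-testing and regression testing (`apply_discount` and its defect are assumptions for illustration): re-testing re-runs the case that originally failed, regression testing re-runs the other cases to catch side effects of the fix:

```python
def apply_discount(price, percent):
    """Fixed version; assume it previously failed when percent == 0."""
    return price * (1 - percent / 100)

# Re-testing: run the exact scenario that originally failed.
assert apply_discount(100, 0) == 100

# Regression testing: re-run existing cases to confirm the fix
# did not break functionality that was already working.
assert apply_discount(100, 10) == 90
assert apply_discount(200, 50) == 100
print("re-test and regression checks passed")
```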

111.What is Sanity Testing ?
Brief test of the major functional elements of a piece of software to determine if it is basically operational.

112.What is Scalability Testing ?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

113.What is Security Testing ?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

114.What is Stress Testing ?
Stress testing is a form of testing that is used to determine the stability of a given system or entity. It involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.

115.What is Smoke Testing ?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

116.What is Soak Testing ?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear only after a large number of transactions have been executed.

117.What’s the Usability testing ?
Usability testing checks user friendliness: how easily end users can learn and operate the application.

118.What’s the User acceptance testing ?
User acceptance testing is determining if software is satisfactory to an end-user or customer.

119.What’s the Volume Testing ?
In volume testing the system is subjected to a large volume of data to check its behaviour and performance.

Describe your QA experience (emphasis on Telecom)
What testing tools have you used? How long?
What is white box, black box?
What testing phases have you participated in?
What’s the difference between functional testing, system test, and UAT?
Have you used testing calendars? In what phase?
When would you perform regression testing?
What would you base your test cases on?
How do you make sure the results are as expected?
What’s more important: Positive or Negative testing?
How would you create test data that is as close as possible to real data?
Do you have experience with Oracle?
Have you used SQL? For what purposes?
What is your knowledge/experience with Unix?
Have you used Unix shell scripts?
Have you written WinRunner scripts?
Have you written LoadRunner scripts?
What tools did you use to log defects?
What do you think is the main challenge in testing software?
What other groups did you interact with (developers, users, analysts)?
Who would you rather work with?
When you realize the load you have cannot be done in the time given, how would you handle it?

Tell us about yourself? (This is to know the person and assess the communication skills)
What is testing life cycle?
Explain SDLC and your involvement?
Tell us the process you follow in your organization?
What is boundary value analysis?
Explain Equivalent Partitioning?
What is bug life cycle?
How to use QC?
What is severity and priority, explain the difference?
When do you stop testing? ( I mean when do you say, testing is done?)
What do you write in a test plan?
What is test strategy?
What is risk analysis?
If you have ‘n’ requirements and you have less time how do you prioritize the requirements?
What all types of testing you could perform on a web based application?
What is smoke testing and what is sanity?
How do you find the regression scenarios if a defect is fixed?
What is the difference between a bug, a defect and an error?

Tell us about your responsibilities in your current project?
What is RTM , and how do you use it?
How do you generate reports using QC?
What is Agile methodology?
What is V & V method?
