Software Testing



Table of Contents
  • Testing Overview and its history
  • What is testing? Who does testing? When to start testing and when to stop testing?
  • What is Software Development Life Cycle (SDLC)? Explain with waterfall model.
  • Spiral model with its advantages and disadvantages
  • Prototype model or Pilot model
  • V-model with its advantages and disadvantages
  • Difference between project and product
  • Difference between Verification & Validation
  • White box, Black box and Gray box testing and their differences
  • Difference between Testing, Quality Assurance and Quality Control
  • Difference between Review, Inspection and Walkthrough
  • Difference between Bug, Error, Defect and Failure
Testing Types
  1. Functional Testing
  2. Unit Testing and System Testing
  3. Integration Testing (Top Down/Bottom Up)
  4. Regression Testing and Retesting
  5. Acceptance Testing (α testing and β testing)
  6. Smoke and sanity testing and their differences
  7. Adhoc testing and Exploratory testing and their differences
  8. Difference between Functional Testing and Non-Functional Testing
  9. Performance Testing
  10. Usability Testing
  11. Accessibility Testing
  12. Link Testing
  13. Compatibility Testing
  14. Performance Testing (Load/Stress/Soak/Volume)
  15. Regression Testing (Retesting/Regional Regression testing/Full Regression testing)
  16. Globalization Testing (I18N Testing/L10N Testing)
  17. Security Testing
  18. Penetration Testing
  19. Reliability Testing
  20. Web testing and Mutation testing
  21. Portability Testing
  • What are Test Cases? And Test Case Design Techniques (Error Guessing/Equivalence Partitioning/BVA)
  • Test Case Design Template with characteristics of good test cases
  • Test Case Review process and its Template
  • Test Case Repository with its structure
  • Steps for writing the test cases and Traceability Matrix
  • Software Testing Life Cycle (STLC)
  • Test Plan and Types of Test Plan
  • Defect Life Cycle or Bug Life Cycle
  • Difference between Severity and Priority in the testing life cycle
  • CMMi Level certification

Testing Overview and its history

Overview:
A primary purpose of testing is to detect software failures so that defects may be discovered and corrected. Testing cannot establish that a product functions properly under all conditions; it can only establish that it does not function properly under specific conditions. The scope of software testing often includes examination of the code as well as execution of that code in various environments and conditions, and it examines aspects of the code such as whether it does what it is supposed to do and what it needs to do. In the current culture of software development, a testing organization may be separate from the development team.
There are various roles for testing team members. Information derived from software testing may be used to correct the process by which software is developed.
Every software product has a target audience. For example, the audience for video game software is completely different from banking software. Therefore, when an organization develops or otherwise invests in a software product, it can assess whether the software product will be acceptable to its end users, its target audience, its purchasers and other stakeholders. Software testing is the process of attempting to make this assessment.
Continue Reading...
-----------------------------------------------------------------------------------------------------------

What is testing? Who does testing? When to start testing and when to stop testing?

What is testing?
Testing is the process of exercising and evaluating a system or system component, by manual or automated means, to verify whether it satisfies the customer's requirements.
OR
The process of identifying defects is also known as testing. In other words, software testing is a verification and validation process.

Who does testing?
It depends on the process and the associated stakeholders of the project(s). In the IT 
Continue Reading... 
----------------------------------------------------------------------------------------------------------------------------------
What is Software Development Life Cycle (SDLC)? Explain with waterfall model.

SDLC is the process of developing software, and the different processes used to develop software are called SDLC models.
The development process adopted for a project will depend on the project aims and goals. There are numerous development life cycles that have been developed in order to achieve different required objectives.
These life cycles range from lightweight and fast methodologies, where time to market is of the essence, through to fully controlled and documented methodologies where quality and reliability are key drivers.
Each of these methodologies has its place in modern software development and the most appropriate development process should be applied to each project. The models specify the various stages of the process and the order in which they are carried out.
The life cycle model that is adopted for a project will have a big impact on the testing that is carried out.
Test activities are related to software development activities. Different development life cycle models need different approaches to testing.
Continue Reading... 
-------------------------------------------------------------------------------------------------------------
    Spiral model with its advantages and disadvantages

    Spiral Model

    In the spiral model the project is developed in parts.
    The process begins at the center position. From there it moves clockwise in traversals. Each traversal of the spiral usually results in a deliverable; exactly what that deliverable is changes from traversal to traversal. For example, the first traversal may result in a requirement specification, the second in a prototype, and the next in another prototype or sample of the product, until the last traversal leads to a product that is suitable to be sold. Consequently, the related activities and their documentation also mature towards the outer traversals; e.g. a formal design and testing session would be placed in the last traversal.
    Continue Reading...
    -------------------------------------------------------------------------------------------------------------

    Prototype model or Pilot model

    Prototype Model (Pilot Model)

    A prototype is essentially a demo application built to show the client, who may not belong to the IT sector, how the application will look after completion, so the client can decide whether any changes are needed or it can stay as it is.
    I. A prototype is a working model of the software with some limited functionality.
    II. The prototype does not always hold the exact logic used in the actual software application and is an extra effort to be considered under effort estimation.
    III. Prototyping is used to allow users to evaluate developer proposals and try them out before implementation.
    IV. It also helps in understanding requirements which are user specific and may not have been considered by the developer during product design.

    Following is the step-wise approach to design a software prototype:
    Continue Reading...
    -------------------------------------------------------------------------------------------------------------


    V-model with its advantages and disadvantages
    V model

    The V-model is a software development model that is an improvement over the waterfall model. The V-model is normally used to manage software development projects in large companies.
    The V-model is also called the verification and validation (V&V) model. It involves very intensive testing in order to eliminate the bugs/errors that may occur during each stage of a development project using the V-model.
    [Figure: V-model diagram]

    Following are the Verification phases in V-Model:
    Continue Reading...
    --------------------------------------------------------------------------------------------------------------

    Difference between Software Project and Software Product

    Project:
    If a software application is designed for a specific client, then it is called a PROJECT.
    OR
    If an organization is developing the application according to the client's specifications, then it is called a project.

    Product:
    If a software application is designed for multiple clients, then it is called a PRODUCT.
    Example: Windows.
    OR
    If an organization is developing the application and marketing it, then it is called a PRODUCT.

    | No | Software Project | Software Product |
    | --- | --- | --- |
    | 1 | If a software application is designed for a specific client, then it is called a software project | If a software application is designed for multiple clients, then it is called a software product |
    | 2 | If an organization is developing the application according to the client's specifications, then it is called a software project | If an organization is developing the application and marketing it, then it is called a software product |
    | 3 | The requirements are generally captured from the specific customer to build the project | Software products are built based on a concept or idea; requirements are gathered from multiple customers |
    | 4 | The software source code is owned by the customer | The product can be sold to multiple customers |
    | 5 | The development life cycle of the project lasts until delivery | The development life cycle of the product lasts until the product becomes obsolete |
    | 6 | A conditional contract will be there for maintenance | An annual maintenance contract will be there for new-version support |

    Note: In project testing we go through Alpha Testing, Beta Testing and User Acceptance Testing, whereas in product testing we do not have these.

    --------------------------------------------------------------------------------------------------------------

    Difference between Verification & Validation

    Verification:-
    1. In general, verification is defined as “Are we building the PRODUCT RIGHT?”, i.e. verification is a process that makes sure the software product is developed in the right way. The software should conform to its predefined specifications; as the product development goes through different stages, an analysis is done to ensure that all required specifications are met.
    2. During verification, the work product (the ready part of the software being developed and various documentation) is reviewed/examined personally by one or more persons in order to find and point out the defects in it. This process helps in the prevention of potential bugs, which may cause failure of the project.
    3. The verification process describes whether the outputs are according to the inputs or not.
    Continue Reading...
    --------------------------------------------------------------------------------------------------------------

    Difference between White Box, Black Box and Gray Box Testing:

    White Box Testing
    The white box testing strategy deals with the internal logic and structure of the code. White box testing is also called glass box, structural, open box or clear box testing. Tests written based on the white box testing strategy incorporate coverage of the code written: branches, paths, statements, internal logic of the code, etc.

    In order to implement white box testing, the tester has to deal with the code and hence needs to possess knowledge of coding and logic, i.e. the internal working of the code. White box testing also needs the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning.
    Continue Reading...
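    To make this concrete, here is a minimal white-box sketch in Python, assuming a small classify_triangle function as the unit under test (the function and the tests are illustrative, not from the original article): each test case is chosen by reading the code and targeting one internal branch.

```python
# Minimal white-box (structural) testing sketch: the tests below are written
# by looking at the code's branches, not just at its specification.
import unittest

def classify_triangle(a, b, c):
    """Hypothetical unit under test with several internal branches."""
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TestClassifyTriangleBranches(unittest.TestCase):
    # One test per branch so every path through the code is exercised.
    def test_non_positive_side_branch(self):
        self.assertEqual(classify_triangle(0, 1, 1), "invalid")

    def test_inequality_violation_branch(self):
        self.assertEqual(classify_triangle(1, 2, 10), "invalid")

    def test_equilateral_branch(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles_branch(self):
        self.assertEqual(classify_triangle(3, 3, 5), "isosceles")

    def test_scalene_branch(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

if __name__ == "__main__":
    unittest.main()
```

    A branch-coverage tool (for example coverage.py run with its --branch option) can then confirm that the suite really exercises every branch.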
    --------------------------------------------------------------------------------------------------------------

    Difference between Testing, Quality Assurance and Quality Control

    The terms “quality assurance” and “quality control” are often used interchangeably to refer to ways of ensuring the quality of a service or product. The terms, however, have different meanings.

    Assurance: The act of giving confidence, the state of being certain or the act of making certain.
    Quality Assurance: The planned and systematic activities implemented in a quality system so that quality requirements for a product or service will be fulfilled.


    Control: An evaluation to indicate needed corrective responses; the act of guiding a process in which variability is attributable to a constant system of chance causes.
    Quality Control: The observation techniques and activities used to fulfill requirements for quality.


    | Aspect | Quality Assurance | Quality Control |
    | --- | --- | --- |
    | Definition | QA is a set of activities for ensuring quality in the processes by which products are developed. | QC is a set of activities for ensuring quality in products. The activities focus on identifying defects in the actual products produced. |
    | Focus on | QA aims to prevent defects with a focus on the process used to make the product. It is a proactive quality process. | QC aims to identify (and correct) defects in the finished product. Quality control, therefore, is a reactive process. |
    | Goal | The goal of QA is to improve development and test processes so that defects do not arise when the product is being developed. | The goal of QC is to identify defects after a product is developed and before it's released. |
    | How | Establish a good quality management system and the assessment of its adequacy. Periodic conformance audits of the operations of the system. | Finding & eliminating sources of quality problems through tools & equipment so that customer's requirements are continually met. |
    | What | Prevention of quality problems through planned and systematic activities including documentation. | The activities or techniques used to achieve and maintain the product quality, process and service. |
    | Responsibility | Everyone on the team involved in developing the product is responsible for quality assurance. | Quality control is usually the responsibility of a specific team that tests the product for defects. |
    | Example | Verification is an example of QA | Validation/Software Testing is an example of QC |
    | Statistical Techniques | Statistical Tools & Techniques can be applied in both QA & QC. When they are applied to processes (process inputs & operational parameters), they are called Statistical Process Control (SPC); & it becomes the part of QA. | When statistical tools & techniques are applied to finished products (process outputs), they are called as Statistical Quality Control (SQC) & comes under QC. |
    | As a tool | QA is a managerial tool | QC is a corrective tool |
    -------------------------------------------------------------------------------------------------------------

    Difference between Bug, Error, Defect and Failure

    What is an error?
    An error is a deviation between the actual and the expected value.
    It represents a mistake made by people.

    What is a fault?
    A fault is an incorrect step, process or data definition in a computer program which causes the program to behave in an unintended or unanticipated manner.

    It is the result of an error.

    What is a bug?
    A bug is a fault in the program which causes the program to behave in an unintended or unanticipated manner.
    It is evidence of a fault in the program.

    What is a failure?
    A failure is the inability of a system or a component to perform its required functions within specified performance requirements.
    A failure occurs when a fault executes.

    What is a defect?
    A defect is an error in coding or logic that causes a program to malfunction or to produce incorrect/unexpected results.
    Continue Reading...
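    A minimal Python sketch of how these terms relate (the average function is a made-up example): the programmer's mistake is the error, the missing check in the code is the fault/defect, and the crash observed when that fault is executed is the failure.

```python
# Illustration of error -> fault -> failure (hypothetical example).

def average(values):
    # Error: the programmer's mistake was assuming the list is never empty.
    # Fault/defect: the missing empty-list check in the code below.
    return sum(values) / len(values)

# No failure here: the fault exists but is not executed with a triggering input.
print(average([2, 4, 6]))   # 4.0

# Failure: executing the fault with an empty list makes the program deviate
# from its expected behaviour (it crashes instead of handling the case).
try:
    print(average([]))
except ZeroDivisionError as exc:
    print("Failure observed:", exc)
```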
    --------------------------------------------------------------------------------------------------------------

    Functional Testing

    1. Functional testing is a type of testing conducted to test the functionality of the application with respect to the requirements.
    2. Also referred to as black box testing.
    3. Functional test cases are derived from the functional requirements of the software and are the basis for system testing. They help you test whether the required functionality is working as per the specifications.

    • Ensure that the system under test is operational as per the customer requirements.
    • Verify whether the application is ready for release.
    • Ensure that the system operates as per the user’s expectations.
            Continue Reading...
    ---------------------------------------------------------------------------------------------------------------
    Integration Testing

    • Testing the integration of multiple units/modules.
    • The objective is to test the application that is built by integrating the unit-tested components.
    • This tests interfaces between components, interactions with different parts of a system such as the OS and hardware, and interfaces between systems.
    • Testers should concentrate solely on the integration itself and not on the functionality of the individual modules being integrated.
    • Performed by the developer/tester.
    • Various approaches to integration testing could be:
      • Top Down Integration (test control structures – compiling, loading and so on)
      • Bottom Up Integration (allows flexibility in scheduling and parallel testing)
      • Big Bang Integration (use controlled/restricted big bang; use it for backbone testing)
      • Sandwich Integration (use a combination of all)
    Continue Reading...  
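    As a rough illustration of the top-down approach, the sketch below tests a high-level OrderService while a stub (via unittest.mock) stands in for a lower-level payment module that has not been integrated yet; all class and method names are assumptions made for the example.

```python
# Top-down integration sketch: the high-level OrderService is exercised first,
# with a stub standing in for the not-yet-integrated payment module.
import unittest
from unittest import mock

class OrderService:
    """High-level module under integration (hypothetical)."""
    def __init__(self, payment_gateway):
        self.payment_gateway = payment_gateway

    def place_order(self, amount):
        # The interaction with the lower-level component is what we integrate/test.
        if self.payment_gateway.charge(amount):
            return "confirmed"
        return "payment_failed"

class TopDownIntegrationTest(unittest.TestCase):
    def test_order_confirmed_when_payment_succeeds(self):
        payment_stub = mock.Mock()                 # stub for the missing lower module
        payment_stub.charge.return_value = True
        service = OrderService(payment_stub)
        self.assertEqual(service.place_order(100), "confirmed")
        payment_stub.charge.assert_called_once_with(100)

    def test_order_rejected_when_payment_fails(self):
        payment_stub = mock.Mock()
        payment_stub.charge.return_value = False
        service = OrderService(payment_stub)
        self.assertEqual(service.place_order(100), "payment_failed")

if __name__ == "__main__":
    unittest.main()
```

    In a bottom-up approach the real lower-level module would be tested first, with a simple driver calling it in place of the still-missing higher-level code.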
    -------------------------------------------------------------------------------------------

    Unit Testing and System Testing:-
    • Testing of an individual unit (e.g. a function, program or module).
    1. Also referred to as white box testing.
    2. Typically performed by the developer.
    3. The goal of unit testing is to uncover defects using formal techniques like:
    • Resource-behavior (e.g. memory leaks), performance or robustness testing, as well as structural testing
    • Defects and deviations in date formats
    • Special requirements on input conditions (for example, a text box where only numerics or only alphabets should be entered)
    • Selections based on combo boxes, list boxes, option buttons and check boxes
    Continue Reading...
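    A minimal unit-test sketch in Python's unittest, assuming two small hypothetical functions that reflect the checks listed above (numeric-only input and a fixed date format):

```python
# Minimal unit-test sketch for individual functions (hypothetical validators).
import unittest
from datetime import datetime

def parse_quantity(text):
    """Accept only positive whole numbers typed into a quantity text box."""
    if not text.isdigit():
        raise ValueError("quantity must be numeric")
    return int(text)

def parse_order_date(text):
    """Accept dates only in the DD/MM/YYYY format."""
    return datetime.strptime(text, "%d/%m/%Y").date()

class QuantityAndDateUnitTests(unittest.TestCase):
    def test_numeric_input_is_accepted(self):
        self.assertEqual(parse_quantity("42"), 42)

    def test_alphabetic_input_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_quantity("4a")

    def test_valid_date_format(self):
        self.assertEqual(str(parse_order_date("01/02/2024")), "2024-02-01")

    def test_wrong_date_format_is_rejected(self):
        with self.assertRaises(ValueError):
            parse_order_date("2024-02-01")

if __name__ == "__main__":
    unittest.main()
```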
    -----------------------------------------------------------------------------------------------------------

    Regression Testing and Retesting

    Regression Testing
    • It is a type of testing carried out to ensure that changes made for defect fixes or enhancements do not impact the previously working functionality.
    • It is executed after enhancements or defect fixes in the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle.
    • Every time a change is made to existing working code, a suite of test cases has to be executed to ensure that the changes do not break working features and do not introduce any new bugs into the software. It is essential to prepare such a test suite and execute it on every newer version of the software. This test suite also helps in creating an automated regression testing script (a minimal runner is sketched below).
    Continue Reading...
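    The automated regression script mentioned above could be as simple as the following sketch, assuming the regression test cases live as unittest modules under a tests/regression folder (the layout is an assumption for the example):

```python
# Minimal regression-suite runner sketch (hypothetical layout): all regression
# test modules live under tests/regression/ and are re-run on every new build.
import sys
import unittest

def run_regression_suite():
    # Discover every test file named test_*.py under the regression folder.
    suite = unittest.defaultTestLoader.discover("tests/regression", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # A non-zero exit code lets a CI job fail the build when a regression appears.
    return 0 if result.wasSuccessful() else 1

if __name__ == "__main__":
    sys.exit(run_regression_suite())
```

    Wiring such a runner into the build pipeline makes the regression suite run on every new version automatically.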
    -----------------------------------------------------------------------------------------------------------  
    User Acceptance Testing with Alpha and Beta Testing


    User Acceptance Testing:

    In the case of software, User acceptance testing (UAT) is the last phase of the software testing process. Acceptance testing performed by the customer is known as User Acceptance Testing (UAT), end-user testing, site (acceptance) testing, or field (acceptance) testing.
    • Conducted just before the software is released to production
    • Tests system's readiness for deployment and use.
    • Performed by the end-users from the client’s team with a certain set of test cases and typical scenarios
    • Goal is to establish confidence in system.
    • Finding defects is not the main focus.
    • Demonstrates to the customer that the predefined acceptance criteria have been met by the system.
    • Acceptance criteria may be defined by the customer or approved by the customer.
    • The customer may write the acceptance test cases and involve testers from the vendor company to execute them, or testers from the vendor company may produce the acceptance test cases, which are then approved by the customer.

    The customer may demand to:
    1. Run the tests themselves
    2. Witness tests run by the tester
    3. Inspect documented test results

    There are two types of acceptance testing.

    Alpha (α) Testing: Testing done by the company, in the presence of the client, at the company's premises.

    Beta (β) Testing: Testing done by the end users.



    Difference between smoke testing and sanity testing


    Smoke Testing
    Smoke testing is done to ensure whether the build can be accepted for thorough software testing or not. Basically, it is done to check the stability of the build received for software testing.
    In smoke testing, the test cases chosen cover the most important functionality or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionality of the system is working fine.

    For example, typical smoke tests would be: verify that the application launches successfully, check that the GUI is responsive, etc.
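    A scripted smoke check could look like the sketch below; the URLs are placeholders for whatever an application's truly critical pages are.

```python
# Minimal smoke-test sketch for a web build (hypothetical URLs): only the most
# critical checks: does the application come up and answer on its key pages?
import sys
import urllib.request

CRITICAL_PAGES = [
    "http://localhost:8080/",        # application launches / home page loads
    "http://localhost:8080/login",   # login page is reachable
    "http://localhost:8080/health",  # health endpoint reports OK
]

def smoke_test():
    for url in CRITICAL_PAGES:
        try:
            with urllib.request.urlopen(url, timeout=5) as response:
                if response.status != 200:
                    print(f"SMOKE FAIL: {url} returned {response.status}")
                    return False
        except OSError as exc:
            print(f"SMOKE FAIL: {url} not reachable ({exc})")
            return False
        print(f"SMOKE OK: {url}")
    return True

if __name__ == "__main__":
    sys.exit(0 if smoke_test() else 1)
```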

    Sanity Testing
    After receiving a build with minor changes in the code or functionality, a subset of regression test cases is executed to check whether the changes rectified the software bugs or issues and that no other software bugs were introduced by the changes. Sometimes, when multiple cycles of regression testing are executed, sanity testing of the software can be done at later cycles, after thorough regression test cycles. If we are moving a build from the staging/testing server to the production server, sanity testing of the software application can be done to check whether the build is sane enough to move further to the production server or not.

    Differences:

    | Sl.No | Smoke Testing | Sanity Testing |
    | --- | --- | --- |
    | 1 | Smoke Testing is performed to ascertain that the critical functionality of the program is working fine | Sanity Testing is done to check the new functionality / bugs have been fixed |
    | 2 | Smoke testing is usually documented or scripted | Sanity testing is usually not documented and is unscripted |
    | 3 | This testing is performed by the developers or testers | Sanity testing is usually performed by testers |
    | 4 | The objective of this testing is to verify the "stability" of the system in order to proceed with more rigorous testing | The objective of the testing is to verify the "rationality" of the system in order to proceed with more rigorous testing |
    | 5 | Smoke testing is a subset of Regression testing | Sanity testing is a subset of Acceptance testing |
    | 6 | Smoke testing is like a general health check-up | Sanity Testing is like a specialized health check-up |
    | 7 | Smoke testing exercises the entire system from end to end | Sanity testing exercises only the particular component of the entire system |
    | 8 | Smoke testing performed on a particular build is also known as a build verification test | Sanity Testing is also called tester acceptance testing |


    What is Adhoc and Exploratory Testing

    Ad-hoc Testing
    1. Ad hoc testing is a commonly used term for software testing performed without planning and documentation.
    2. The tests are intended to be run only once, unless a defect is discovered.
    3. Adhoc testing is the least formal test method.
    4. The strength of ad hoc testing is that important defects can be found quickly.
    5. A creative and informal way of testing the software
    6. Conducted without any understanding of the software before testing it
    7. Not conducted based on formal test plans or test cases
    8. Performed by Testers
    9. Performed on a Stable application or product.
    10. Behavior of the application when the same functionality is executed multiple times.
    11. Behavior of the application when the previous and next screen navigation is performed more than once.
    The ad-hoc approach is most useful when there are:
    • Inadequate specifications
    • Severe time pressures
    It is performed to verify the stability of the application.
    • This type of testing is done without any formal test plan or test case creation.
    • Testing the application randomly is called ad-hoc testing.
    • The way in which end users use the application is totally different from the way we test it; as test engineers we do not want any user to find defects, so we test the application randomly.
    When to do ad-hoc testing
    • Do not do ad-hoc testing soon after smoke testing.
    • After the formal testing, if we have time and some creative ideas, then we can go for ad-hoc testing.
    Exploratory Testing
    • The plainest definition of exploratory testing is test design and test execution at the same time.
    • Exploratory tests are not defined in advance and are not carried out precisely according to a plan.
    • Understand the application, write test cases and do the testing. Whenever there are no requirements, we go for exploratory testing.
    • It means that without requirements we have to test the application.

    Difference between Functional and Non-Functional Testing

    Functional Testing: 
    1. Testing the application against the business requirements. Functional testing is done using the functional specifications provided by the client or by using design specifications such as use cases provided by the design team.
    2. Functional testing is concerned only with the functional requirements of a system or subsystem and covers how well (if at all) the system executes its functions. These include any user commands, data manipulation, searches and business processes, user screens, and integration.
    The following testing types should be considered as functional testing types:
    • Unit Testing
    • Smoke Testing / Sanity Testing
    • Integration Testing (Top Down / Bottom Up Testing)
    • Interface & Usability Testing
    • System Testing
    • Regression Testing
    • Pre User Acceptance Testing(Alpha & Beta)
    • User Acceptance Testing
    • White Box & Black Box Testing
    • Globalization & Localization-testing


    Non Functional Testing:
    1. Non-functional testing is the type of testing done against the non-functional requirements. Most of these criteria are not considered in functional testing, so non-functional testing is used to check the readiness of a system. Non-functional requirements tend to be those that reflect the quality of the product, particularly in the context of the suitability perspective of its users. It can be started after the completion of functional testing. Non-functional tests can be made more effective by using testing tools.
    2. It is the testing of software attributes which are not related to any specific function or user action, such as performance, scalability, security or the behavior of the application under certain constraints.
    3. Non-functional testing has a great influence on customer and user satisfaction with the product. Non-functional requirements should be expressed in a testable way; statements like “the system should be fast” or “the system should be easy to operate” are not testable.
    4. Basically, non-functional testing is used to measure the non-functional attributes of software systems. As examples of non-functional requirements: how much time does the software take to complete a task? How fast is the response?


    The following testing types should be considered as non-functional testing types:
    • Availability Testing
    • Baseline testing
    • Compatibility testing
    • Compliance testing
    • Configuration Testing
    • Documentation testing
    • Endurance testing
    • Ergonomics Testing
    • Interoperability Testing
    • Installation Testing
    • Load testing
    • Localization testing and Internationalization testing
    • Maintainability Testing
    • Operational Readiness Testing
    • Performance testing
    • Recovery testing
    • Reliability Testing
    • Resilience testing
    • Security testing
    • Scalability testing
    • Stress testing
    • Usability testing
    • Volume testing


    Performance Testing(Load/Stress/Volume/Soak Testing)


    Performance Testing
    Testing the stability and response time of an application over a period of time by applying load is called performance testing.
    Here, load may mean the number of users or the volume of data.

    Why Do Performance Testing?
    At the highest level, performance testing is almost always conducted to address one or more risks related to expense, opportunity costs, continuity, and/or corporate reputation. Some more specific reasons for conducting performance testing include:
    1. Assessing release readiness by:
       1. Enabling you to predict or estimate the performance characteristics of an application in production and evaluate whether or not to address performance concerns based on those predictions. These predictions are also valuable to the stakeholders who make decisions about whether an application is ready for release or capable of handling future growth, or whether it requires a performance improvement/hardware upgrade prior to release.
       2. Providing data indicating the likelihood of user dissatisfaction with the performance characteristics of the system.
       3. Providing data to aid in the prediction of revenue losses or damaged brand credibility due to scalability or stability issues, or due to users being dissatisfied with application response time.
    2. Assessing infrastructure adequacy by:
       1. Evaluating the adequacy of current capacity.
       2. Determining the acceptability of stability.
       3. Determining the capacity of the application’s infrastructure, as well as determining the future resources required to deliver acceptable application performance.
       4. Comparing different system configurations to determine which works best for both the application and the business.
       5. Verifying that the application exhibits the desired performance characteristics, within budgeted resource utilization constraints.
    3. Assessing adequacy of developed software performance by:
       1. Determining the application’s desired performance characteristics before and after changes to the software.
       2. Providing comparisons between the application’s current and desired performance characteristics.
    4. Improving the efficiency of performance tuning by:
       1. Analyzing the behavior of the application at various load levels.
       2. Identifying bottlenecks in the application.
       3. Providing information related to the speed, scalability, and stability of a product prior to production release, thus enabling you to make informed decisions about whether and when to tune the system.

    Load Testing
    • It is the simplest form of performance testing. A load test is usually conducted to understand the behavior of the system under a specific expected load. This load can be the expected concurrent number of users on the application performing a specific number of transactions within the set duration. This test will give out the response times of all the important business critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point towards bottlenecks in the application software.
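    As a rough illustration (not a replacement for dedicated tools such as JMeter, LoadRunner or Locust), the sketch below simulates a number of concurrent virtual users against an assumed URL and reports basic response-time statistics:

```python
# Minimal load-test sketch (hypothetical URL): N concurrent virtual users each
# issue requests and the script reports response-time statistics.
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/search?q=book"   # business-critical transaction (assumed)
CONCURRENT_USERS = 20
REQUESTS_PER_USER = 10

def one_user():
    timings = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(URL, timeout=30) as response:
            response.read()
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(lambda _: one_user(), range(CONCURRENT_USERS)))
    all_times = [t for user in results for t in user]
    print(f"requests: {len(all_times)}")
    print(f"avg response time: {statistics.mean(all_times):.3f}s")
    print(f"95th percentile:   {statistics.quantiles(all_times, n=20)[-1]:.3f}s")
```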

    Stress Testing
    • It is normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system's robustness in terms of extreme load and helps application administrators to determine if the system will perform sufficiently if the current load goes well above the expected maximum.

    Spike Testing
    • Spike testing is done by suddenly increasing the number of or load generated by users - by a very large amount - and observing the behavior of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.

    Configuration Testing
    • Rather than testing for performance from the perspective of load, tests are created to determine the effects of configuration changes to the system's components on the system's performance and behavior. A common example would be experimenting with different methods of load balancing.

    Soak Testing
    • Soak Testing is a type of performance test that verifies a system's stability and performance characteristics over an extended period of time.  It is typical in this type of performance test to maintain a certain level of user concurrency for an extended period of time.  This type of test can identify issues relating to memory allocation, log file handles, and database resource utilisation.
    • Typically, issues are identified in shorter, targeted performance tests.  Soak Testing provides a measure of a system's stability over an extended period of time.

    Volume Testing
    • This is a term used to describe load testing, performance testing or stress testing. It is the process of placing concurrent user- and/or system-generated loads onto a system under test.
    • The goals of volume testing are to determine:
    • the volume or load at which a system's stability degrades, and
    • the issues preventing a system from reaching required service level agreements (SLAs) or volumetric targets, so that they can be identified and then tuned.
    • A typical performance test or load testing cycle combines both volume testing and performance tuning to ensure a system reaches its required benchmarks.

        Usability Testing

    1. This testing is also called testing for user-friendliness.
    2. It is used to check whether the application is user-friendly or not. For example:
    • All the features should be easily understandable.
    • The language used should be simple and understandable by a common person.
    • The most widely used features should be bold and they should be easily accessible.
    3. In usability testing the tester basically tests the ease with which the user interfaces can be used.
       It tests whether the application or the product built is user-friendly or not.
    4. Usability testing is a black box testing technique.
    5. Usability testing also reveals whether users feel comfortable with your application or
       web site according to different parameters - the flow, navigation and layout, speed and
       content - especially in comparison to prior or similar applications.
    6. Usability testing tests the following features of the software:
    • How easy it is to use the software.
    • How easy it is to learn the software.
    • How convenient the software is to the end user.

     Usability testing includes the following five components:

    Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
    Efficiency: How fast can experienced users accomplish tasks?
    Memorability: When users return to the design after a period of not using it, does the user remember enough to use it effectively the next time, or does the user have to start over again learning everything?
    Errors: How many errors do users make, how severe are these errors and how easily can they recover from the errors?
    Satisfaction: How much does the user like using the system?

    Benefits of usability testing to the end user or the customer:
    1. Better quality software
    2. Software is easier to use
    3. Software is more readily accepted by users
    4. Shortens the learning curve for new users
    Advantages of usability testing:
    • Usability test can be modified to cover many other types of testing such as functional testing, system integration testing, unit testing, smoke testing etc.
    • Usability testing can be very economical if planned properly, yet highly effective and beneficial.
    • If proper resources (experienced and creative testers) are used, usability test can help in fixing all the problems that user may face even before the system is finally released to the user. This may result in better performance and a standard system.
    • Usability testing can help in discovering potential bugs and pitfalls in the system which generally are not visible to developers and even escape the other types of testing.
    Usability testing is a very wide area of testing and it needs a fairly high level of understanding of this field along with a creative mind. People involved in usability testing are required to possess skills like patience, the ability to listen to suggestions, openness to welcoming any idea, and, most important of all, good observation skills to spot and fix the issues or problems.

    Accessibility Testing

    Accessibility testing is a subset of usability testing in which the users under consideration are people with all abilities and disabilities. The significance of this testing is to verify both usability and accessibility.
    Accessibility aims to cater to people of different abilities such as:
    • Visual Impairments
    • Physical Impairment
    • Hearing Impairment
    • Cognitive Impairment
    • Learning Impairment
    A good web application should cater to all sets of people and NOT be limited just to disabled people. These include:
    • Users with poor communications infrastructure
    • Older people and new users, who are often computer illiterate
    • Users using old systems (NOT capable of running the latest software)
    • Users who are using non-standard equipment
    • Users who have restricted access
    For accessibility testing to succeed, the test team should plan a separate cycle for accessibility testing. Management should make sure that the test team has information on what to test and that all the tools they need to test accessibility are available to them.

    Typical test cases for accessibility might look similar to the following examples:
    • Make sure that all functions are available via keyboard only (do not use the mouse).
    • Make sure that information is visible when the display setting is changed to High Contrast mode.
    • Make sure that screen reading tools can read all the text available and that every picture/image has corresponding alternate text associated with it (a small scripted alt-text check is sketched after this list).
    • Make sure that product-defined keyboard actions do not affect accessibility keyboard shortcuts.
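    The alt-text check referenced above might be scripted roughly as follows; the page URL is an assumption, and real accessibility audits combine such scripted checks with screen readers and tools such as axe or WAVE.

```python
# Minimal accessibility check sketch: verify every <img> in a page has a
# non-empty alt attribute.
import urllib.request
from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing_alt.append(attrs.get("src", "<unknown src>"))

if __name__ == "__main__":
    url = "http://localhost:8080/"   # page under test (assumed)
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    checker = ImgAltChecker()
    checker.feed(html)
    if checker.missing_alt:
        print("Images missing alternate text:", checker.missing_alt)
    else:
        print("All images have alternate text.")
```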

    Compatibility Testing
    1. Today, the Internet and the ubiquitous browser have become a way of life. A variety of devices from traditional desktops, laptops, tablets and smart phones form part of the Internet access landscape.
    2. Today the challenge lies in testing applications in an intelligent and optimal manner ensuring that they comply with the functionality, appearance, navigation and performance needs.
    3. ApTest is expert at testing products for compatibility with hardware and software environments.
    4. We can compatibility test your web site, CD, or application quickly and inexpensively.
    5. ApTest operates testing labs offering all the hardware and software needed for such testing including:
    • PCs, MACs, and UNIX workstations and servers
    • Old and new versions of Windows 98/ME/2000/NT/XP/Vista/7, MacOS X, Linux, and many varieties of UNIX
    • Old and new browsers including Netscape / Firefox, Internet Explorer, and Safari
    • A variety of other applications software

    Globalization Testing

    Globalization is the term used to describe the process of producing software that can be run independent of its geographical and cultural environment. Localization is the term used to describe the process of customizing the globalized software for a specific environment. For simplicity, the term “globalization” will be used to describe both concepts, for in the broadest sense of the term, software is not truly globalized unless it is localized as well.
    There are a lot of aspects that must be considered when producing globalized software. Some of these aspects are as follows:
    • Sensitivity to the English vocabulary
    • Date and time formatting
    • Currency handling
    • Paper sizes for printing
    • Address and telephone number formatting
    Advantages of Globalization Testing:
    • It reduces overall testing and support costs
    • It helps us to reduce testing time, which results in faster time-to-market
    • It is more flexible and product is easily scalable


    What is Internationalization (I18N) Testing?
    Internationalization testing is the process of verifying the application under test to work uniformly across multiple regions and cultures.
    The main purpose of internationalization is to check if the code can handle all international support without breaking functionality that might cause data loss or data integrity issues. Globalization testing verifies if there is proper functionality of the product with any of the locale settings.

    Internationalization Checklists:
    • Testing to check if the product works across settings.
    • Verifying the installation using various settings.
    • Verify if the product works across language settings and currency settings
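    A small sketch of such a check, where format_price is a hypothetical stand-in for the application's real locale-aware formatting code and the expected strings encode each locale's conventions:

```python
# Minimal I18N-check sketch: a locale-aware formatting function is verified
# against the conventions expected for each locale.
import unittest

def format_price(amount, locale_code):
    """Stand-in for the application's real locale-aware formatter (assumed)."""
    if locale_code == "en_US":
        return f"${amount:,.2f}"
    if locale_code == "de_DE":
        # German convention: dot as thousands separator, comma as decimal mark.
        text = f"{amount:,.2f}".replace(",", "X").replace(".", ",").replace("X", ".")
        return f"{text} €"
    raise ValueError(f"unsupported locale {locale_code}")

class InternationalizationTests(unittest.TestCase):
    def test_us_currency_format(self):
        self.assertEqual(format_price(1234.5, "en_US"), "$1,234.50")

    def test_german_currency_format(self):
        self.assertEqual(format_price(1234.5, "de_DE"), "1.234,50 €")

    def test_unsupported_locale_is_reported(self):
        with self.assertRaises(ValueError):
            format_price(10, "xx_XX")

if __name__ == "__main__":
    unittest.main()
```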

    What is Localization (L10N) Testing?
    Localization testing is performed to verify the quality of a product's localization for a particular target culture/locale and is executed only on the localized version of the product.

    Localization Testing - Characteristics:
    • Modules affected by localization, such as UI and content
    • Culture/locale-specific, language-specific and region-specific modules
    • Critical Business Scenarios Testing
    • Installation and upgrading tests run in the localized environment
    • Plan application and hardware compatibility tests according to the product's target region.

    Localization Testing - UI Testing:
    • Check for linguistic errors and resource attributes
    • Typographical errors
    • Verify the system's adherence to the input and display environment standards
    • Usability testing of the User interface
    • Verify cultural appropriateness of UI such as color, design, etc.



    What is Reliability Testing?

    A system's reliability is a measure of stability and overall performance of a system collated during an extended period of time under various specific sets of test conditions.  This type of testing incorporates the results from non-functional testing such as stress testing, security testing, network testing, along with functional testing.  It is a combined metric to define a system's overall reliability.  A measure of reliability should be defined by business requirements in the form of service levels.  These requirements should then be used to measure test results and the overall reliability metric of a system under test.

    Software reliability is measured in terms of Mean Time Between Failures (MTBF).

    MTBF consists of Mean Time To Failure (MTTF) and Mean Time To Repair (MTTR). MTTF is the average time the system operates before a failure occurs, and MTTR is the time required to fix the failure. Reliability is expressed as a number between 0 and 1, and it increases when errors or bugs are removed from the program.
    For example, if MTBF = 1000 hours for a piece of software, then the software should be able to run for 1000 hours of continuous operation on average.

    Parameters involved in reliability testing (the dependent elements):
    • Probability of failure-free operation
    • Length of time of failure-free operation
    • The environment in which it is executed


    Key Parameters that are measured as part of reliability are given below:
    • MTTF: Mean Time to Failure
    • MTTR: Mean Time to Repair
    • MTBF: Mean Time between Failures (= MTTF + MTTR)


    Software reliability is the probability that software will work properly in a specified environment and for a given amount of time. Using the following formula, the probability of failure is calculated by testing a sample of all available input states.

    Probability = Number of failing cases / Total number of cases under consideration
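    A worked example of these formulas with assumed figures (the timestamps, repair times and test-case counts below are invented for illustration):

```python
# Worked example of the reliability formulas above, using assumed figures.

failure_timestamps_hours = [120.0, 370.0, 655.0]   # times at which failures occurred
repair_times_hours = [4.0, 6.0, 5.0]               # time spent fixing each failure
total_test_cases = 2000
failing_test_cases = 8

# MTTF: average operating time before a failure, approximated here from the
# failure timestamps of a continuously running system.
uptimes = [failure_timestamps_hours[0]] + [
    later - earlier
    for earlier, later in zip(failure_timestamps_hours, failure_timestamps_hours[1:])
]
mttf = sum(uptimes) / len(uptimes)
mttr = sum(repair_times_hours) / len(repair_times_hours)
mtbf = mttf + mttr

probability_of_failure = failing_test_cases / total_test_cases
reliability = 1 - probability_of_failure

print(f"MTTF = {mttf:.1f} h, MTTR = {mttr:.1f} h, MTBF = {mtbf:.1f} h")
print(f"Probability of failure = {probability_of_failure:.4f}, reliability = {reliability:.4f}")
```

    Here reliability is reported simply as 1 minus the measured probability of failure, which is one common way of reading the formula above.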



    What is Security Testing?
    Security Testing is a type of software testing that intends to uncover vulnerabilities of the system and determine that its data and resources are protected from possible intruders.
    Confidentiality:
    A security measure which protects against the disclosure of information to parties other than the intended recipient. This is by no means the only way of ensuring security.
    Integrity:
    A measure intended to allow the receiver to determine that the information provided by a system is correct.
    Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding information to a communication, to form the basis of an algorithmic check, rather than encoding all of the communication.

    There are four main focus areas to be considered in security testing (Especially for web sites/applications):
    • Network security: This involves looking for vulnerabilities in the network infrastructure (resources and policies).
    • System software security: This involves assessing weaknesses in the various software (operating system, database system, and other software) the application depends on.
    • Client-side application security: This deals with ensuring that the client (browser or any such tool) cannot be manipulated.
    • Server-side application security: This involves making sure that the server code and its technologies are robust enough to fend off any intrusion.

    EXAMPLE OF A BASIC SECURITY TEST
    • This is an example of a very basic security test which anyone can perform on a web site/application (a scripted version is sketched after this list):
    • Log into the web application.
    • Log out of the web application.
    • Click the BACK button of the browser (Check if you are asked to log in again or if you are provided the logged-in application.)
    • Most types of security testing involve complex steps and out-of-the-box thinking but, sometimes, it is simple tests like the one above that help expose the most severe security risks.
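    The same check can be scripted, for example with Python's standard library as below; the URLs, form field names and the "Welcome" marker text are assumptions about the application under test.

```python
# Scripted version of the basic check above: after logging out, the old session
# should no longer grant access to a protected page, which is what the browser
# BACK button would try to show.
import urllib.error
import urllib.parse
import urllib.request
from http.cookiejar import CookieJar

BASE = "http://localhost:8080"   # application under test (assumed)
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(CookieJar()))

# 1. Log in (field names are assumptions about the application under test).
login_data = urllib.parse.urlencode({"username": "alice", "password": "secret"}).encode()
opener.open(f"{BASE}/login", data=login_data, timeout=10)

# 2. Confirm the protected page is visible while logged in.
dashboard = opener.open(f"{BASE}/dashboard", timeout=10).read().decode()
assert "Welcome" in dashboard, "expected logged-in content before logout"

# 3. Log out, then request the protected page again with the same (old) session.
opener.open(f"{BASE}/logout", timeout=10)
try:
    after_logout = opener.open(f"{BASE}/dashboard", timeout=10)
    body = after_logout.read().decode()
    # A secure application should redirect to the login page (or deny access),
    # not serve the logged-in content again.
    assert "login" in after_logout.geturl().lower() or "Welcome" not in body, \
        "SECURITY ISSUE: protected page still accessible after logout"
except urllib.error.HTTPError as err:
    # A 401/403 response after logout is the desired behaviour.
    assert err.code in (401, 403), f"unexpected status {err.code}"

print("Back-button / session-invalidation check passed.")
```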

    BUILDING TRUST

        There is an infinite number of ways to break an application. And, security testing, by itself, is not the only (or the best) measure of how secure an application is. But, it is highly recommended that security testing is included as part of the standard software development process. After all, the world is teeming with hackers/pranksters and everyone wishes to be able to trust the system/software one produces or uses.



    What is Penetration Testing?

    It is the method of testing in which the areas of weakness in software systems, in terms of security, are put to the test to determine whether a 'weak point' really is one that can be broken into or not.
    Performed for: Websites/Servers/Networks

    How is it performed?
    • Step #1. It starts with a list of Vulnerabilities/potential problem areas that would cause a security breach for the systems.
    • Step #2. If possible, this list of items has to be ranked in the order of priority/criticality
    • Step #3. Devise penetration tests that would work (attack your system) from both within the network and outside (externally) to determine if you can access data/network/server/website unauthorized.
    • Step #4. If the unauthorized access is possible, the system has to be corrected and the series of steps need to be re-run until the problem area is fixed.
    Who performs Pen-testing?
    • Testers/ Network specialists/ Security Consultants
    • Note: it is important to note that pen-testing is not the same as vulnerability testing. The intention of vulnerability testing is just to identify potential problems, whereas pen-testing is to attack those problems.
    • Good news is, you do not have to start the process by yourself – you have a number of tools already in the market.  Why tools, you ask?
    • Even though you design the test on what to attack and how, you can leverage a lot of tools that are available in the market to hit the problem areas and collect data quickly that enables effective security analysis of the system.
    • Before we look into the details of the tools, what they do, where you can get them, etc., I would like to point out that the tools you use for pen-testing can be classified into two kinds. In simple words they are: scanners and attackers. This is because, by definition, pen-testing is exploiting the weak spots. So there are some software/tools that will show you the weak spots, and some that show and attack. Strictly speaking, the 'show-ers' are not pen-testing tools, but they are indispensable for its success.
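    As a tiny illustration of the 'scanner' side (nothing like a full tool such as the ones listed below), the sketch checks which common ports answer on a host; it should only ever be pointed at systems you are authorised to test, and the target host name here is a placeholder.

```python
# Minimal "scanner"-side sketch (illustration only, not a real pen-testing tool):
# check which common ports answer on a host you are authorised to test.
import socket

TARGET_HOST = "scanme.example.com"   # hypothetical, authorised target
COMMON_PORTS = {21: "ftp", 22: "ssh", 23: "telnet", 80: "http", 443: "https", 3306: "mysql"}

def scan(host, ports):
    open_ports = []
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1.0)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                open_ports.append((port, service))
    return open_ports

if __name__ == "__main__":
    for port, service in scan(TARGET_HOST, COMMON_PORTS):
        print(f"open: {port}/{service}")
```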

    Top Penetration Testing Tools:
    • Metasploit 
    • Wireshark
    • w3af
    • CORE Impact
    • Back Track
    • Netsparker
    • Nessus
    • Burpsuite
    • Cain & Abel
    • Zed Attack Proxy (ZAP)
    • Etc..



