Howling at the moon

  I found this cool space-and-moon image in Seattle.

I woke up around 2:45 a.m. this morning for some unknown reason and remembered the weatherman talking about a lunar eclipse happening tonight. I tried to grab a couple of photos of it, but it was really a stretch for my equipment. While standing outside in my sleep shorts and t-shirt, it struck me that it was relatively warm for the end of August. Although the thermometer read 55°F, it was still quite comfortable. For the twenty minutes or so I was outside, the coyotes were howling up a storm; I couldn’t count how many I heard, but it had to be anywhere from 5 to 10 hounds filling the still night air with their Halloween-like song. It kind of reminded me of an old Sherlock Holmes tale, The Hound of the Baskervilles.


Thread, Thread, Thread… Execute a Method Asynchronously.

Problem: You need to start execution of a method and continue with other tasks while the method runs on a separate thread. After the method completes, you need to retrieve the method’s return value.

Solution: Declare a delegate with the same signature as the method you want to execute. Create an instance of the delegate that references the method. Call the BeginInvoke method of the delegate instance to start executing your method. Use the EndInvoke method to determine the method’s status as well as to obtain the method’s return value when it completes.

Sample Code:

The following code demonstrates how to use the asynchronous execution pattern. It uses a delegate named AsyncExampleDelegate to execute a method with a configurable delay (produced using Thread.Sleep). The example contains the following five methods that demonstrate the various approaches to handling asynchronous method completion:

  • BlockingExample: This method executes LongRunningMethod asynchronously and continues with a limited set of processing. Once this processing is complete, BlockingExample blocks until LongRunningMethod completes. To block, BlockingExample calls the EndInvoke method of the AsyncExampleDelegate delegate instance. If LongRunningMethod has already finished, EndInvoke returns immediately; otherwise, BlockingExample blocks until LongRunningMethod completes.
  • PollingExample: This method executes LongRunningMethod asynchronously and then enters a polling loop until LongRunningMethod completes. PollingExample tests the IsCompleted property of the IAsyncResult instance returned by BeginInvoke to determine whether LongRunningMethod is complete; otherwise, PollingExample calls Thread.Sleep.
  • WaitingExample: This method executes LongRunningMethod asynchronously and then waits on the AsyncWaitHandle of the IAsyncResult instance returned by BeginInvoke, using a timeout so it can report each time the wait expires before LongRunningMethod completes.
  • WaitAllExample: This method executes LongRunningMethod asynchronously three times and uses WaitHandle.WaitAll to wait on the AsyncWaitHandle instances of all three IAsyncResult objects, so processing continues only once all three invocations are complete.
  • CallbackExample: This method executes LongRunningMethod asynchronously and passes an AsyncCallback delegate to BeginInvoke; the callback (CallbackHandler) runs when LongRunningMethod completes and calls EndInvoke to retrieve the result, leaving the calling thread free to continue other processing.

using System;
using System.Text;
using System.Threading;
using System.Collections;

namespace ConsoleApplication6
{
    class Program
    {

        private static void TraceMsg(DateTime time, string msg)
        {
            Console.WriteLine("[{0,3}/{1}] – {2} : {3}",
                Thread.CurrentThread.ManagedThreadId,
                Thread.CurrentThread.IsThreadPoolThread ? "pool" : "fore",
                time.ToString("HH:mm:ss.ffff"), msg);
        }

        public delegate DateTime AsyncExampleDelegate(int delay, string name);

        public static DateTime LongRunningMethod(int delay, string name)
        {
            TraceMsg(DateTime.Now, name + " example – thread starting.");

            Thread.Sleep(delay);

            TraceMsg(DateTime.Now, name + " example – thread stopping.");

            return DateTime.Now;
        }

        public static void BlockingExample()
        {
            Console.WriteLine(Environment.NewLine + "*** Running Blocking Example ***");

            AsyncExampleDelegate longRunningMethod = LongRunningMethod;

            IAsyncResult asyncResult = longRunningMethod.BeginInvoke(2000, "Blocking", null, null);

            for (int count = 0; count < 3; count++)
            {
                TraceMsg(DateTime.Now, "Continue processing until ready to block…");

                Thread.Sleep(200);
            }

            TraceMsg(DateTime.Now, "Blocking until method is complete…");

            DateTime completion = DateTime.MinValue;

            try
            {
                completion = longRunningMethod.EndInvoke(asyncResult);
            }
            catch
            {
                // Swallow any exception thrown by LongRunningMethod;
                // real code should log or rethrow it.
            }

            TraceMsg(completion, "Blocking example complete…");
        }

        public static void PollingExample()
        {
            Console.WriteLine(Environment.NewLine + "*** Running Polling Example ***");

            AsyncExampleDelegate longRunningMethod = LongRunningMethod;

            IAsyncResult asyncResult = longRunningMethod.BeginInvoke(2000, "Polling", null, null);

            TraceMsg(DateTime.Now, "Poll repeatedly until method is complete…");

            while (!asyncResult.IsCompleted)
            {
                TraceMsg(DateTime.Now, "Polling…");

                Thread.Sleep(300);
            }

            DateTime completion = DateTime.MinValue;

            try
            {
                completion = longRunningMethod.EndInvoke(asyncResult);
            }
            catch
            {
            }

            TraceMsg(completion, "Polling example complete…");
        }

        public static void WaitingExample()
        {
            Console.WriteLine(Environment.NewLine + "*** Running Waiting Example ***");

            AsyncExampleDelegate longRunningMethod = LongRunningMethod;

            IAsyncResult asyncResult = longRunningMethod.BeginInvoke(2000, "Waiting", null, null);

            TraceMsg(DateTime.Now, "Waiting until method is complete…");

            while (!asyncResult.AsyncWaitHandle.WaitOne(300, false))
            {
                TraceMsg(DateTime.Now, "Wait timeout…");
            }

            DateTime completion = DateTime.MinValue;

            try
            {
                completion = longRunningMethod.EndInvoke(asyncResult);
            }
            catch
            {
            }

            TraceMsg(completion, "Waiting example complete.");
        }

        public static void WaitAllExample()
        {
            Console.WriteLine(Environment.NewLine + "*** Running WaitAll Example ***");

            ArrayList asyncResults = new ArrayList(3);

            AsyncExampleDelegate longRunningMethod = LongRunningMethod;

            asyncResults.Add(longRunningMethod.BeginInvoke(3000, "WaitAll 1", null, null));

            asyncResults.Add(longRunningMethod.BeginInvoke(2500, "WaitAll 2", null, null));

            asyncResults.Add(longRunningMethod.BeginInvoke(1500, "WaitAll 3", null, null));

            WaitHandle[] waitHandles = new WaitHandle[3];

            for (int count = 0; count < 3; count++)
            {
                waitHandles[count] = ((IAsyncResult)asyncResults[count]).AsyncWaitHandle;
            }

            TraceMsg(DateTime.Now, "Waiting until all 3 methods are complete…");

            while(!WaitHandle.WaitAll(waitHandles, 300, false))
            {
                TraceMsg(DateTime.Now, "WaitAll timeout…");
            }

            DateTime completion = DateTime.MinValue;

            foreach(IAsyncResult result in asyncResults)
            {
                try
                {
                    DateTime time = longRunningMethod.EndInvoke(result);
                    if (time > completion) completion = time;
                }
                catch
                {
                }
            }
            TraceMsg(completion, "WaitAll example complete.");
        }

        public static void CallbackExample()
        {
            Console.WriteLine(Environment.NewLine + "*** Running Callback Example ***");

            AsyncExampleDelegate longRunningMethod = LongRunningMethod;

            IAsyncResult asyncResult = longRunningMethod.BeginInvoke(2000,
                "Callback", CallbackHandler, longRunningMethod);

            for (int count = 0; count < 15; count++)
            {
                TraceMsg(DateTime.Now, "Continue processing…");
                Thread.Sleep(200);
            }
        }

        public static void CallbackHandler(IAsyncResult result)
        {
            AsyncExampleDelegate longRunningMethod =
                (AsyncExampleDelegate)result.AsyncState;

            DateTime completion = DateTime.MinValue;

            try
            {
                completion = longRunningMethod.EndInvoke(result);
            }
            catch
            {
            }

            TraceMsg(completion, "Callback example complete.");
        }

        public static void Main()
        {
            BlockingExample();
            PollingExample();
            WaitingExample();
            WaitAllExample();
            CallbackExample();

            Console.WriteLine(Environment.NewLine);
            Console.WriteLine("Main method complete. Press Enter.");
            Console.ReadLine();
        }

    }
}

Testing question (No.3)

Let me share some of my experience with software testing!


Q: What is system testing?
A: System testing is black box testing performed by the test team; at the start of system testing, the complete system is configured in a controlled environment.
The purpose of system testing is to validate an application’s accuracy and completeness in performing the functions as designed.
System testing simulates real life scenarios in a "simulated real life" test environment and tests all functions of the system that are required in real life.
System testing is deemed complete when actual results and expected results are either in line or differences are explainable or acceptable, based on client input.

Q: What is integration testing?
A: Integration testing is black box testing. The purpose of integration testing is to ensure that distinct components of the application still work in accordance with customer requirements. Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable / acceptable, based on client input.

Q: What is stochastic testing?
A: Stochastic testing is the same as "monkey testing", but "stochastic testing" is a more technical-sounding name for the same testing process.
Stochastic testing is black box, random testing performed by automated testing tools. Stochastic testing is a series of random tests over time.
The software under test typically passes the individual tests; the goal is to see whether it can pass a large series of them.
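As a minimal sketch of this idea, a "dumb monkey" can simply be a loop that feeds a long series of random inputs to the code and counts unexpected failures. The function under test here (ParsePercent) and its input format are hypothetical, invented purely for illustration:

```csharp
using System;

class MonkeyTestSketch
{
    // Hypothetical function under test: parses a percentage string such as "42%".
    static int ParsePercent(string s)
    {
        int value = int.Parse(s.TrimEnd('%'));
        if (value < 0 || value > 100)
            throw new ArgumentOutOfRangeException(nameof(s));
        return value;
    }

    // A "dumb monkey": fire a long series of random inputs and count
    // unexpected failures (crashes), which are what monkey testing exists to find.
    public static int RunMonkey(int iterations)
    {
        var random = new Random(12345); // fixed seed keeps the random series reproducible
        int failures = 0;

        for (int i = 0; i < iterations; i++)
        {
            string input = random.Next(-50, 150) + "%";
            try
            {
                ParsePercent(input);
            }
            catch (ArgumentOutOfRangeException)
            {
                // Expected rejection of an out-of-range value; not a failure.
            }
            catch (Exception)
            {
                failures++; // an unexpected crash: the kind of bug you least want to ship
            }
        }
        return failures;
    }

    static void Main()
    {
        Console.WriteLine("Unexpected failures in 10,000 random tests: " + RunMonkey(10000));
    }
}
```

Each individual random test typically passes; the point of the exercise is surviving the whole series.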

Q: What is regression testing?
A: The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts is maintained and executed to verify changes introduced during the release have not "undone" any previous code. Expected results from the baseline are compared to results of the software under test. All discrepancies are highlighted and accounted for, before testing proceeds to the next level.

Q: What is mutation testing?
A: In mutation testing, we create mutant software, we make mutant software to fail, and thus demonstrate the adequacy of our test case.
When we create a set of mutant software, each mutant software differs from the original software by one mutation, i.e. one single syntax change made to one of its program statements, i.e. each mutant software contains only one single fault.
When we apply test cases to the original software and to the mutant software, we evaluate if our test case is adequate.
Our test case is inadequate, if both the original software and all mutant software generate the same output.
Our test case is adequate, if our test case detects faults, or, if, at least one mutant software generates a different output than does the original software for our test case.
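The adequacy check above can be sketched in a few lines. The Add function and its mutant here are hypothetical, invented only to illustrate the single-syntax-change idea:

```csharp
using System;

class MutationSketch
{
    // Original program statement under test.
    public static int Add(int a, int b) => a + b;

    // Mutant software: differs by one single syntax change ("+" became "-"),
    // i.e. it contains exactly one fault.
    public static int AddMutant(int a, int b) => a - b;

    // A test case is adequate if the mutant generates a different output
    // than the original does for that test case (the mutant is "killed").
    public static bool KillsMutant(int a, int b) => Add(a, b) != AddMutant(a, b);

    static void Main()
    {
        Console.WriteLine(KillsMutant(2, 0)); // False: both return 2, so this test case is inadequate
        Console.WriteLine(KillsMutant(2, 3)); // True: 5 vs. -1, so this test case is adequate
    }
}
```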

Q: What do test case templates look like?

A: Software test cases are documents that describe inputs, actions, or events and their expected results, in order to determine if all features of an application are working correctly.
A software test case template is, for example, a 6-column table, where column 1 is the "Test case ID number", column 2 is the "Test case name", column 3 is the "Test objective", column 4 is the "Test conditions/setup", column 5 is the "Input data requirements/steps", and column 6 is the "Expected results".
All documents should be written to a certain standard and template. Standards and templates maintain document uniformity. It also helps in learning where information is located, making it easier for a user to find what they want. Lastly, with standards and templates, information will not be accidentally omitted from a document.
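For illustration, one row of such a 6-column template might look like the following (the test case and its values are purely hypothetical):

```text
Test case | Test case   | Test objective   | Test conditions/ | Input data           | Expected
ID number | name        |                  | setup            | requirements/steps   | results
----------+-------------+------------------+------------------+----------------------+--------------
TC-001    | Valid login | Verify login     | Test account     | Enter valid user     | Home page is
          |             | succeeds with    | exists; server   | name and password;   | displayed
          |             | valid credentials| is running       | click "Log in"       |
```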

Q: What is the difference between system testing and integration testing?
A: System testing is high-level testing, and integration testing is lower-level testing. Integration testing is completed first, not system testing. In other words, upon completion of integration testing, system testing is started, and not vice versa.
For integration testing, test cases are developed with the express purpose of exercising the interfaces between the components.
For system testing, on the other hand, the complete system is configured in a controlled environment, and test cases are developed to simulate real life scenarios that occur in a simulated real life test environment.
The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.
The purpose of system testing, on the other hand, is to validate an application’s accuracy and completeness in performing the functions as designed, and to test all functions of the system that are required in real life.

Q: How do you perform integration testing?
A: First, unit testing has to be completed. Upon completion of unit testing, integration testing begins. Integration testing is black box testing. The purpose of integration testing is to ensure distinct components of the application still work in accordance with customer requirements.
Test cases are developed with the express purpose of exercising the interfaces between the components. This activity is carried out by the test team.
Integration testing is considered complete, when actual results and expected results are either in line or differences are explainable/acceptable based on client input.


Q: What is monkey testing?
A: "Monkey testing" is random testing performed by automated testing tools. These automated testing tools are considered "monkeys", if they work at random.
We call them "monkeys" because it is widely believed, if we allow six monkeys to pound on six typewriters at random, for a million years, they will recreate all the works of Isaac Asimov.
There are "smart monkeys" and "dumb monkeys".
"Smart monkeys" are valuable for load and stress testing, and will find a significant number of bugs, but they’re also very expensive to develop.
"Dumb monkeys", on the other hand, are inexpensive to develop, are able to do some basic testing, but they will find few bugs. However, the bugs "dumb monkeys" do find will be hangs and crashes, i.e. the bugs you least want to have in your software product.
"Monkey testing" can be valuable, but it should not be your only form of testing.

Q: What is smoke testing?
A: Smoke testing is a relatively simple check to see whether the product "smokes" when it runs. Smoke testing is also known as ad hoc testing, i.e. testing without a formal test plan.
With many projects, smoke testing is carried out in addition to formal testing. If smoke testing is carried out by a skilled tester, it can often find problems that are not caught during regular testing.
Sometimes, if testing occurs very early or very late in the software development cycle, this can be the only kind of testing that can be performed.
Smoke tests are, by definition, not exhaustive, but, over time, you can increase your coverage of smoke testing.
A common practice at Microsoft, and some other software companies, is the daily build and smoke test process. This means, every file is compiled, linked, and combined into an executable file every single day, and then the software is smoke tested.
Smoke testing minimizes integration risk, reduces the risk of low quality, supports easier defect diagnosis, and improves morale.
Smoke testing does not have to be exhaustive, but should expose any major problems. Smoke testing should be thorough enough that, if it passes, the tester can assume the product is stable enough to be tested more thoroughly.
Without smoke testing, the daily build is just a time-wasting exercise. Smoke testing is the sentry that guards against errors in development and future problems during integration.
At first, smoke testing might be the testing of something that is easy to test. Then, as the system grows, smoke testing should expand and grow, from a few seconds to 30 minutes or more.

Q: What is structural testing?
A: Structural testing is also known as clear box testing or glass box testing. Structural testing is a way to test software with knowledge of the internal workings of the code being tested.
Structural testing is white box testing, not black box testing, since black boxes are considered opaque and do not permit visibility into the code.

Q: What is grey box testing?
A: Grey box testing is a software testing technique that uses a combination of black box testing and white box testing. Grey box testing is not black box testing, because the tester does know some of the internal workings of the software under test.
In grey box testing, the tester applies a limited number of test cases to the internal workings of the software under test. In the remaining part of the grey box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.
Grey box testing is a powerful idea. The concept is simple: if one knows something about how the product works on the inside, one can test it better, even from the outside.
Grey box testing is not to be confused with white box testing, i.e. a testing approach that attempts to cover the internals of the product in detail. Grey box testing is a test strategy based partly on internals.
The testing approach is known as grey box testing when one has some knowledge, but not full knowledge, of the internals of the product one is testing.
In grey box testing, just as in black box testing, you test from the outside of a product, but you make better-informed testing choices because you know how the underlying software components operate and interact.

Q: When do you choose automated testing?
A: For larger projects, or ongoing long-term projects, automated testing can be valuable. But for small projects, the time needed to learn and implement the automated testing tools is usually not worthwhile.
Automated testing tools sometimes do not make testing easier. One problem with automated testing tools is that if there are continual changes to the product being tested, the recordings have to be changed so often, that it becomes a very time-consuming task to continuously update the scripts.
Another problem with such tools is the interpretation of the results (screens, data, logs, etc.) that can be a time-consuming task.

Q: What’s the difference between priority and severity?
A: The simple answer is, "Priority is about scheduling, and severity is about standards."
The complex answer is, "Priority means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). Severity is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness; severe is marked by or requires strict adherence to rigorous standards or high principles, e.g. a severe code of behavior."

Q: What’s the difference between efficient and effective?
A: "Efficient" means having a high ratio of output to input; working or producing with a minimum of waste. For example, "An efficient test engineer wastes no time", or "An efficient engine saves gas".
"Effective", on the other hand, means producing, or capable of producing, an intended result, or having a striking effect. For example, "For automated testing, WinRunner is more effective than an oscilloscope", or "For rapid long-distance transportation, the jet engine is more effective than a witch’s broomstick".

Execute a Method Using the Thread Pool.

Problem: You need to execute a task using a thread from the runtime’s thread pool.

Solution: Declare a method containing the code you want to execute. The method’s signature must match that defined by the System.Threading.WaitCallback delegate; that is, it must return void and take a single object argument. Call the static method QueueUserWorkItem of the System.Threading.ThreadPool class, passing it your method name. The runtime will queue your method and execute it when a thread-pool thread becomes available.

How it works: Applications that use many short-lived threads or maintain large numbers of concurrent threads can suffer performance degradation because of the overhead associated with the creation, operation, and destruction of threads. In addition, it is common in multithreaded systems for threads to sit idle a large portion of the time while they wait for the appropriate conditions to trigger their execution. Using a thread pool provides a common solution to improve the scalability, efficiency, and performance of multithreaded systems.

The .NET Framework provides a simple thread-pool implementation accessible through the members of the ThreadPool static class. The QueueUserWorkItem method allows you to execute a method using a thread-pool thread by placing a work item on a queue. As a thread from the thread pool becomes available, it takes the next work item from the queue and executes it. The thread performs the work assigned to it, and when it is finished, instead of terminating, the thread returns to the thread pool and takes the next work item from the work queue.

using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;

namespace ConsoleApplication5
{
    class Program
    {

        private class MessageInfo
        {
            private int iterations;
            private string message;

            public MessageInfo(int iterations, string message)
            {
                this.iterations = iterations;
                this.message = message;
            }

            public int Iterations
            {
                get
                {
                    return iterations;
                }
            }

            public string Message
            {
                get
                {
                    return message;
                }
            }
        }
        public static void DisplayMessage(object state)
        {
            MessageInfo config = state as MessageInfo;

            if (config == null)
            {
                for (int count = 0; count < 3; count++)
                {
                    Console.WriteLine("A thread pool example.");

                    Thread.Sleep(1000);
                }
            }
            else
            {
                for (int count = 0; count < config.Iterations; count++)
                {
                    Console.WriteLine(config.Message);

                    Thread.Sleep(1000);
                }
            }
        }

        public static void Main()
        {
            ThreadPool.QueueUserWorkItem(DisplayMessage);

            MessageInfo info = new MessageInfo(5, "A thread pool example with arguments.");

            ThreadPool.QueueUserWorkItem(DisplayMessage, info);

            Console.WriteLine("Main method complete. Press Enter.");
            Console.ReadLine();

        }
    }
}