I was recently asked to write a little something about my thoughts on doing QA and to answer some questions about my experience. I was told to write only two sentences, but I made three so that I could delete one later. I thought it would be OK to post my answers here, and I hope you get something out of reading them.


I'm a bit of a perfectionist when it comes to my work. I find myself agonizing over details, and I often have multiple drafts and versions of emails, code examples, and tutorials that I'm going to publish just as soon as I change the one or two critical things that will finally make my point. This is one of the reasons I've been in Quality Assurance for as long as I have: I don't want people to run into problems.

When I first started working, I was put in charge of creating and automating test suites for an application development tool. The mission was to simulate how a user would build a small application using the different components and options available. I quickly learned that automation is only good if you have a clear goal for what you want to accomplish: when the automation breaks, the response should be more than just patching the automation; a break should tell you something. Since then, I've worked with a variety of systems that report an issue when something goes wrong, whether it's a problem in the build system or a change in functionality.

A lot of companies are trying to be agile, and it's important that a QA engineer stay up to date with where the company wants to take the product and have an idea of how people are actually going to use it. I've been in a couple of situations where companies were so focused on the features of the product that they lost track of how everything was supposed to come together for the user. Communication can be difficult (development teams in remote parts of the world, or people who just don't see things the same way), but I feel I've learned a lot of different ways of getting my point across (detailed bug reports, screenshots, and videos) and of understanding where other people are coming from.

In your current role, what is the end product? How do you test it?

In my current role, I test two products. One is a web portal that I test with Selenium for regressions, plus manual testing that follows test plans I've developed. The other is an Eclipse-based IDE for building small applications, which I test manually using test plans I've created.
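To give a flavor of what those Selenium regression checks look like, here's a minimal sketch in Java. The URL, element IDs, and credentials are hypothetical stand-ins rather than the real portal's:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class PortalLoginRegression {
    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            // Hypothetical login page for the portal under test.
            driver.get("http://portal.example.com/login");
            driver.findElement(By.id("username")).sendKeys("testuser");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("loginButton")).click();

            // Regression check: logging in should still land on the dashboard.
            String title = driver.getTitle();
            if (!title.contains("Dashboard")) {
                throw new AssertionError("Expected dashboard after login, got: " + title);
            }
            System.out.println("Login regression passed.");
        } finally {
            driver.quit();
        }
    }
}
```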

Do you have experience with embedded software? Please describe.

No. The closest I've come to a truly embedded system was at Intel, where I had to run data through a simulation of the graphics chip.

What is your experience with programming? Languages? Years of experience? What have you programmed for?

I've always been active in keeping my skills up. I've been writing Java for about 8 years now and use it mainly for creating examples for customers and components within my test plans. I've used Objective-C for 4 years, mostly for my own personal applications. And for the last 2 years I've been getting more and more into Python as a way to automate my Mac, generate test data, and build small web services.

Have you been a member of a Scrum team? Describe your sprints.

Yes. Most of our sprints were related to creating and merging a major feature into the software. The team had to design and implement new features that could make the software unstable, so I was in charge of resolving merge conflicts and doing the regression testing.

What is your experience with defect tracking systems? Where did you do this, and what was the project?

I can't imagine a world where there isn't some kind of defect tracking system! I've used Redmine, Jira, Salesforce, and a couple of systems built in-house (at Intel and Apple). At Unify, we used Salesforce to track customer issues and information; if there was a real bug or an enhancement, we'd put it into Jira or a terminal-based bug tracker, and each person was expected to fully document all the information needed for a bug and to know where to place test artifacts. At Intel and Apple, we used custom-built solutions that required us to put the examples and test artifacts on a shared drive. At Intel, we also had to schedule time on the simulator in order to reproduce and debug problems. At Starview, we use Redmine to document and prioritize issues as they come up.

Describe your experience with source control systems.

I've used CVS, RCS, Subversion, Git, and Mercurial. At Intel, we used Subversion to branch and merge features and bug fixes, and it was my primary duty to handle those merges; the most difficult one involved over 25 files and thousands of lines of code. At Starview, we use Mercurial, which I work with from the command line. I also use Git to check out code from GitHub.

Describe your experience with automated testing tools. What have you used? Where? For how long? In what context?

The first tool I used was SilkTest, at Unify, for 2 years. At Intel, we had a system called Tambor that ran tests in an emulator (not to be confused with the simulator mentioned earlier), and running Tambor was part of the regression testing. Currently, I've been using Jenkins to start automated tasks whenever there's a change in the source code; Jenkins kicks off tasks such as Selenium tests that check how the application responds to user input.

Regression testing

Usually, we add a test to the automated suite based on the severity of the bug, and all critical issues are added to the test plan for manual testing. Usually this is broken down into use cases, and there is a lot of overlap. For example, a new feature might be introduced (say, a different event is triggered when users click and hold a button), and we have to make sure the new functionality doesn't change how things used to work in unexpected ways. I've had to write both the plans that describe how a feature is supposed to be used and the code that would actually exercise it.
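For the click-and-hold example above, the automated side of the check might look something like this sketch in Java with Selenium. The page URL, element IDs, status values, and hold duration are all hypothetical:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.interactions.Actions;

public class ClickAndHoldRegression {
    public static void main(String[] args) throws InterruptedException {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://app.example.com/buttons"); // hypothetical page
            WebElement button = driver.findElement(By.id("actionButton")); // hypothetical id

            // Old behavior: a plain click should still fire the original event.
            button.click();
            assertStatus(driver, "clicked");

            // New behavior: click-and-hold should trigger the new event
            // without breaking the plain click above.
            new Actions(driver).clickAndHold(button).perform();
            Thread.sleep(1500); // hold long enough for the long-press handler
            new Actions(driver).release(button).perform();
            assertStatus(driver, "held");
        } finally {
            driver.quit();
        }
    }

    // Reads a status element the test page is assumed to update on each event.
    private static void assertStatus(WebDriver driver, String expected) {
        String status = driver.findElement(By.id("status")).getText();
        if (!status.equals(expected)) {
            throw new AssertionError("Expected '" + expected + "' but saw '" + status + "'");
        }
    }
}
```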

Acceptance testing

At my current position, we noticed that the application was starting to feel slower to use. We didn't have any real data to tell whether there was a problem, other than noticing that our automation suite was taking longer to run. I took it upon myself to learn and implement JMeter and to look at several plugins for Jenkins so that we could compare the performance of individual tests from one build to the next.
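The real setup leans on JMeter and the Jenkins plugins, but the underlying idea is simple enough to sketch in Java: time the suite, compare it against a baseline recorded by the previous build, and fail the build when it gets meaningfully slower. The baseline file name and the 20% threshold here are made-up values for illustration:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class PerfBaselineCheck {
    // Flag a run as a regression if it's more than 20% slower than baseline.
    private static final double TOLERANCE = 1.20;

    public static void main(String[] args) throws IOException {
        // Hypothetical file written by the previous build's run.
        Path baselineFile = Paths.get("baseline-ms.txt");

        long elapsed = timeTestSuite();

        if (Files.exists(baselineFile)) {
            long baseline = Long.parseLong(Files.readAllLines(baselineFile).get(0).trim());
            if (elapsed > baseline * TOLERANCE) {
                // A non-zero exit code marks the Jenkins build as failed.
                System.err.printf("Perf regression: %d ms vs baseline %d ms%n", elapsed, baseline);
                System.exit(1);
            }
        }
        // Record this run as the baseline for the next build.
        Files.write(baselineFile, String.valueOf(elapsed).getBytes());
        System.out.printf("Run took %d ms%n", elapsed);
    }

    // Stand-in for kicking off the real automated suite.
    private static long timeTestSuite() {
        long start = System.currentTimeMillis();
        // ... run the Selenium suite or JMeter test plan here ...
        return System.currentTimeMillis() - start;
    }
}
```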

Risk-based testing?

I usually see this as part of regression testing, because of time and resource limitations. If you're able to grow your suite of tests, you can focus on making sure that new features and bug fixes get higher priority for test creation. That being said, the new stuff is usually tested manually first, so that I can get a feel for what it's actually doing versus the documented requirements.
