This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1
These are all by the same author. — Edward Z. Yang(Talk) 23:07, 12 October 2006 (UTC)
As the author, I don't see the problem in that. These are quality links, to different sites. Why did you remove them? Roy Osherove.
The first two paragraphs of the section, "Techniques" don't run together. The IEEE standard for Software Unit Testing, albeit dated, isn't foreign to what occurs today for unit testing (a practice that is most decidedly automated). Perhaps the text regarding the standard should be removed altogether? .digamma 00:08, 28 October 2006 (UTC)
In the lead we currently have the following clause: "constructs such as mock objects can assist in separating unit tests". Could someone clarify what this means? Are we separating unit tests from each other, or from other 'modules' in the system. I guess this is something to do with being able to run unit tests on modules in isolation from other possibly buggy modules, but this isn't clear (at least to me) from the way it is currently worded. Stumps 15:18, 24 November 2006 (UTC)
The section on facilitating change touches on this obliquely, but I think that code hygiene is a separate goal. Most people have worked on codebases that are either deteriorating or have already deteriorated, with the dreaded "don't touch that, nobody understands it" sections. —The preceding unsigned comment was added by 131.107.0.73 (talk) 18:38, 18 January 2007 (UTC).
If the IEEE prescribes neither an automated nor a manual approach, then why even mention it? My dog prescribes neither an automated nor a manual approach. —The preceding unsigned comment was added by Ronnystalker (talk • contribs) 09:16, 7 April 2007 (UTC).
I think that's a much better way of putting it. Perhaps the sentence "The IEEE[1] prescribes neither an automated nor a manual approach." should be changed to "The IEEE[1] does not favour one approach over the other." I'm a newbie so I'm not confident enough to go editing pages (especially code that has a little [1] in it, in a topic that I know little about). But I do know that I stumbled over that sentence as a reader; e.g. "prescribing neither something nor another" seems a little odd to me. Ronnystalker 06:03, 8 April 2007 (UTC)
I think the critique of all existing test frameworks as not being built into the language is both a bit brutal and perhaps a marketing point for the D programming language, which was referenced without any evidence that it improved productivity.
The current content should be rolled back (or backed up with auditable claims), and/or the area broadened to look at other languages. SteveLoughran 10:48, 17 April 2007 (UTC)
The statement referring to the D programming language was inserted on 20:55, 20 July 2004 by 24.16.52.122, who also added similar irrelevant pro-D comments to a number of other Wiki pages. I suggest it be removed. jon 11:48, 14 May 2007 (UTC)
I can understand separating Unit Testing from specific languages, but "... which may be a[n] ... abstract class or ..." seems a little unusual since few languages allow instantiation of abstract classes. Or is this to imply that an abstract class would be tested through a derived class? How would you test an abstract class (generally no code) in an automated way? —The preceding unsigned comment was added by Mweddle (talk • contribs) 16:02, 16 May 2007 (UTC).
I think the point is that an abstract base class is something you might want a unit test for - abstract classes can contain code, however they can't be directly instantiated. To instantiate one for the purposes of testing, you would need to create an instance of a concrete derived class. If one isn't readily available, then you might define a dummy subclass purely for the purposes of testing. jon 16:11, 21 May 2007 (UTC)
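The dummy-subclass approach described above can be sketched as follows. The names (`Shape`, `TestShape`) and the fixed return value are hypothetical, invented purely for illustration:

```java
// Hypothetical abstract base class containing concrete code worth testing.
abstract class Shape {
    abstract double area();

    // Concrete logic that lives in the abstract class itself.
    String describe() {
        return "area=" + area();
    }
}

// Dummy subclass defined purely for testing, as described above.
class TestShape extends Shape {
    @Override
    double area() {
        return 2.0; // known stub value
    }
}

public class AbstractClassTest {
    public static void main(String[] args) {
        Shape s = new TestShape();
        // Exercise the concrete code inherited from the abstract class.
        if (!s.describe().equals("area=2.0")) {
            throw new AssertionError("describe() gave: " + s.describe());
        }
        System.out.println("ok");
    }
}
```

The test exercises `describe()`, which is real code in the abstract class; the dummy subclass exists only to make instantiation possible.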
Some, me among them, have begun to call XP's version of unit testing microtesting. I'm wondering if the text below is appropriate to add to the article under the Extreme Programming sub-head:
A move has been made in the XP world to re-christen the practice of unit testing as microtesting. The case is made that unit testing already has a rich pre-XP meaning that incorporates many un-extreme practices, and that a more precise term is needed. The movement has gained a very small amount of ground: a few hundred XP practitioners and one XP consultancy (Industrial Logic, Inc.).
This is a fact, especially if you emphasize 'very small'. There are probably only a few hundred people who have adopted the term, and it is an attempt to be NPOV. The mentioned link self-proclaims its adoption.
No. In fact, I'm going to just leave this note right here and let someone else decide what to do with this data point. Cheers! GPa Hill 09:03, 22 September 2007 (UTC)
I added some cited content from Alberto Savoia (who has joined forces with Kent Beck, the father of JUnit). I apologize for the repeated versions that I committed, but I struggled to get the note formatting to work. Apparently, the citation template doesn't like extraneous spaces. —Preceding unsigned comment added by MickeyWiki (talk • contribs) 17:47, 29 November 2007 (UTC)
Maybe I'm doing something wrong, but I don't find I need this much test code. I've been running about 1 line of test per line of production code for the last few years.
I don't test everything that is testable. I only make sure that the program passes a few key tests. Like if I wrote a program to convert between Centigrade and Fahrenheit, I would write 4 test cases for each conversion:
Of course, if you consider the only "real" line of code to be the conversion itself:
centigrade = (fahrenheit - 32) / 1.8
... then maybe you do need 3 to 5 lines of code. But I count every non-blank line, so this routine has 4 lines.
float toCentigrade(float fahrenheit) {
    float centigrade = (fahrenheit - 32) / 1.8f;  // 1.8f keeps the arithmetic in float (valid Java)
    return centigrade;
}
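A framework-free sketch of what those four test cases per conversion might look like. The specific input values (freezing point, boiling point, -40, body temperature) are assumptions; the original comment does not list them:

```java
// Hypothetical, framework-free test cases for the toCentigrade routine.
public class ConversionTest {
    static float toCentigrade(float fahrenheit) {
        return (fahrenheit - 32) / 1.8f;
    }

    // Compare with a small tolerance, since float arithmetic is inexact.
    static void check(float input, float expected) {
        float actual = toCentigrade(input);
        if (Math.abs(actual - expected) > 1e-3f) {
            throw new AssertionError(input + "F gave " + actual + "C, expected " + expected);
        }
    }

    public static void main(String[] args) {
        check(32f, 0f);     // freezing point of water
        check(212f, 100f);  // boiling point of water
        check(-40f, -40f);  // the two scales cross at -40
        check(98.6f, 37f);  // normal body temperature
        System.out.println("all 4 cases pass");
    }
}
```

Counting non-blank lines the same way as above, the test code here already outnumbers the routine under test, which illustrates the proportion question being discussed.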
Golly, I hope this is not WP:Original research. :-) --Uncle Ed (talk) 18:32, 23 January 2008 (UTC)
Well, if that's some published author's view, then it should go into the article whether I agree or not - obviously! :-)
I'll dig through the textbooks and articles I learned refactoring and unit testing from. Maybe this will shed some light on the proportion question. But the focus was not on "testing lines of code" but on "making sure the program works".
The thing being tested is not code per se, but the functionality of the program. That is why refactoring can work so well, in conjunction with unit testing. We are never testing any particular line of code. Rather, we are testing whether the program gives the right response when you tell it to do something.
We might create an elaborate set of routines with dozens of lines of code, just to pass a one-line test. If that is what it takes. Just this morning, I created an entire new class in Java because of a failing test that was only 4 lines long. That's lopsided in the other direction! --Uncle Ed (talk) 02:53, 24 January 2008 (UTC)
Yeah, it sounds like you're talking about "code coverage" - which is the angle from which ABC, Inc. was exploring test software. I'm more interested in finding out whether the program does its job correctly. It's a subtle difference. --Uncle Ed (talk) 17:28, 25 January 2008 (UTC)
I believe the definition of "unit" in the first paragraph is incorrect. In object oriented programming a unit should be the method, not the class. Not only is a method the smallest testable part but from a practical point of view, testing at that level reduces the tendency to do functional black box testing and encourages making sure each line of code does what it was intended to do. Sspiro 18:07, 12 September 2007 (UTC)
The definition of a unit test is inconsistent throughout this article; first it's a method, then a class, then a module. Furthermore, you are incorrect when you say that 'unit testing should encourage someone to make sure that each line of code does what it was intended to do'. This is completely opposite to the point of unit testing. A unit test should be a black box test of a unit, not a glass box test of each line of code in just one method; this would make the unit test completely useless as a tool in refactoring and regression testing and is a grave mistake that I have recently seen on a project.
Also wrong in this article is the claim that a unit test should not cross class boundaries. If this were true then writing a unit test would be impossible, because pretty much every method uses some other class in some way. A better unit test would also verify that the collaboration with other objects is correct; writing mocks or stubs for each and every collaborator would be a complete waste of time; you'd have to mock String, StringBuffer, ArrayList etc. ad infinitum. All you end up testing is that the unit interacts with the mocks and stubs as expected, totally pointless. Bagaaz 11:21, 22 December 2008 (UTC)
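For readers unfamiliar with the technique under debate, here is what a hand-written stub for a single collaborator looks like. All names (`RateSource`, `PriceConverter`, `FixedRateSource`) are hypothetical, invented for this sketch:

```java
// Hypothetical collaborator boundary: in production this might call a
// remote service; the stub returns a fixed, known value instead.
interface RateSource {
    double rateFor(String currency);
}

// Hand-written stub used only in the test.
class FixedRateSource implements RateSource {
    public double rateFor(String currency) {
        return 2.0;
    }
}

// The unit under test, which depends on the collaborator via its interface.
class PriceConverter {
    private final RateSource rates;
    PriceConverter(RateSource rates) { this.rates = rates; }

    double convert(double amount, String currency) {
        return amount * rates.rateFor(currency);
    }
}

public class StubExample {
    public static void main(String[] args) {
        // The unit is isolated from any real rate service.
        PriceConverter converter = new PriceConverter(new FixedRateSource());
        if (converter.convert(10.0, "EUR") != 20.0) {
            throw new AssertionError("unexpected conversion result");
        }
        System.out.println("ok");
    }
}
```

Note that this sketch does not contradict the comment above: nobody stubs value types like String or ArrayList; stubbing pays off only at genuine boundaries such as services, clocks, and databases.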
One more error: the current page states that it is a "verification and validation" method. Instead it is a verification method (it verifies the code against the specification). Validation, in software engineering and computer programming, means checking that the software fits the real needs of the users and really works in the context and the environment where it has to operate (this also covers things that, for any reason, are not captured by requirements or expressed in specifications). A unit test definitely is not a validation method. —Preceding unsigned comment added by 213.89.239.157 (talk) 23:42, 3 January 2010 (UTC)
This was written by SangameswaraRao Udatha
Writing test cases using a framework such as JUnit is common. One can also write programs that test other programs without using a framework. However, unit testing does not always mean writing test-case programs. Unit tests can be manual too. (Matt Heusser - I edited the definition to include this possibility, e.g. stepping through code in a debugger.)
Unit tests are tests done by programmers to make sure low-level code works as they intended.
" When basic, low-level code isn't reliable, the requisite fixes don't stay at the low level. You fix the low level problem, but that impacts code at higher levels, which then need fixing, and so on." - Andy Hunt and Dave Thomas
"Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended. Its implementation can vary from being very manual (pencil and paper) to being formalized as part of build automation."
Indeed: When writing my own code (in Lisp or PHP), I manually unit-test each line of code, before even writing the next line of code in most cases. (In fact, if a single line of code involves nested expressions, I often test sub-expressions within that line of code before testing the line of code as a whole, and sometimes even before finishing writing the rest of that one line of code.) Only when I am finished writing and unit-testing all the lines of code to complete a function definition do I finally (again manually) unit-test the function as a whole. If I were to *also* do automated unit testing, that would work well only for functions as a whole, not for lines of code within a function definition.
Accordingly, it can be generally said that manual unit testing can be on units as small as a single line of code, or even sub-expressions within a single line of code, whereas automated unit testing is feasible only on units at the level of function/method or larger.
In that context, the following remark is a half-truth, i.e. a lie: "In procedural programming a unit may be an individual function or procedure." It would be better to qualify that to say that for *automated* unit testing a unit may be an individual function or procedure, while for *manual* unit testing a unit may be as small as a single line of code or even a sub-expression within a single line of code. (As an aside, via assertions, Common Lisp provides a "fail-soft" way to unit-test single lines of code and/or sub-expressions for some types of errors, such as wrong datatype: during program operation, if such an assertion fails, an error is signalled.)
Unfortunately in this Wiki article, the note about procedural programming comes before the remark that unit testing can be either manual or automated (I reversed the two remarks in my discussion above to make it more sensible, i.e. reader-friendly), so I can't see any graceful way to make this edit to repair the half-truth/lie, so I leave it to some expert to perhaps re-organize the introductory material to make this all truthful and in an appropriate sequence that is reader-friendly. In particular, I think it should be said right at the top that unit testing can be either manual or automated, to make that absolutely clear, and *then* during the rest of the article the qualifier "manual" or "automated" or "either manual or automated" can be added to each section where the distinction makes a difference. 198.144.192.45 (talk) 16:22, 8 March 2011 (UTC) Twitter.Com/CalRobert (Robert Maas)
What are the limits of unit testing? How can you unit test "graphical" output or GUI stuff? -- Hahih (talk) 09:57, 30 July 2008 (UTC)
I've edited the definition to make it clear that unit testing is an activity designed to build confidence that the unit is fit for use - not the verification of correctness. Given the number of possible inputs of even a trivial function (say, input two doubles, output a double), verification of correctness is impossible. To quote Glenford Myers: "The only exhaustive testing occurs when the tester is exhausted." —Preceding unsigned comment added by Mheusser (talk • contribs) 20:48, 3 March 2009 (UTC)
Two aspects of the limits of unit testing: 1. Trying to exhaustively test every possible combination of a set of inputs. This may not be the best use of unit testing; model checking may be a better approach if you are trying to do something like that. 2. Limits imposed by encapsulation. I generally think of unit testing like this: when you run a unit test, the functionality is being expressed on a different platform.
Therefore, the functionality to be unit tested must be encapsulated in a way that defines it as platform-independent functionality. That imposes various constraints, but it also requires a certain amount of discipline.
However, it is a good kind of discipline. A process imposes discipline in a rather arbitrary manner that either lacks flexibility or is ambiguous. Phillip Armour's second law of software process: "We can only define software processes at two levels: too vague or too confining." But if you have to create functionality that runs in an equivalent manner within two or more different contexts, the discipline comes about naturally because without it, the thing doesn't work.
Getting back to your example of a GUI, the GUI would probably have to be built following something like a Model-View-Controller (MVC) design pattern. In this way, the controller could be tested on its own without having anything to do with buttons, dialog boxes, menus, etc. - all the objects typically associated with a GUI. The view element becomes a dumb face plate that sends and receives events. It contains all the GUI objects of whatever platform you are developing on. So testing the view becomes very simple and straightforward.
The Model portion becomes simpler too. It is simply the repository of data being collected or displayed. It is often more a matter of database design rather than reactive behavior. There isn't really much to test.
The unit tests are written mostly for the Controller portion of the system. (Entropy7 (talk) 18:20, 16 August 2009 (UTC))
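A minimal sketch of that idea, with hypothetical `Model`/`View`/`Controller` names: the controller is exercised directly, and a recording lambda stands in for the dumb face plate, so no buttons or dialog boxes are involved:

```java
import java.util.ArrayList;
import java.util.List;

// The "dumb face plate": in production this would be GUI widgets.
interface View {
    void display(String text);
}

// The model: just a repository of data, with little to test on its own.
class Model {
    private int count;
    int getCount() { return count; }
    void increment() { count++; }
}

// The controller carries the reactive behavior, so the unit tests target it.
class Controller {
    private final Model model;
    private final View view;
    Controller(Model model, View view) { this.model = model; this.view = view; }

    // Handles what would be a button press in the real GUI.
    void onIncrementPressed() {
        model.increment();
        view.display("count=" + model.getCount());
    }
}

public class ControllerTest {
    public static void main(String[] args) {
        Model model = new Model();
        List<String> shown = new ArrayList<>();
        // A list recording display() calls stands in for the real view.
        Controller controller = new Controller(model, shown::add);

        controller.onIncrementPressed();
        controller.onIncrementPressed();

        if (model.getCount() != 2 || !shown.get(1).equals("count=2")) {
            throw new AssertionError("controller behaved unexpectedly");
        }
        System.out.println("ok");
    }
}
```

Because the controller only sees the View interface, the same controller code can later be wired to SWT, Swing, or the embedded console hardware described below.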
A practical example of this: I worked for a company building a multiprocessor controller for refrigeration equipment. They had a central control unit (one of the processors) and a control console (another processor). There was an asynchronous com-link between the console and the central controller. It was sort of like RS-232 at TTL voltage levels (it was legacy, but I thought this was probably not very noise resistant). Com packets went back and forth between them with an 8 bit checksum verification. Anyway, the console was being completely redesigned with a spiffy new capacitance touch panel behind a plate of tempered glass. But it would be another 4 weeks before hardware would be ready and the customer wanted to start elaborating requirements NOW!! So I did an SWT Java GUI that looked just like the front of the console. All the different packet commands I converted to functions in another file. The functions could either interface to the Java GUI or the com-link of the hardware, depending on which file you included. The development environment was Eclipse 3 for the Java. The code for the actual product was written in C. There was a bit of messiness in going back and forth between Java and C that I won't go into here. It was actually not that big of a deal because the functionality was mostly static. Had it been more dynamic in C++, things could have been a lot more ugly. However, the upshot is that they had a desktop version of their product 3 weeks before the hardware was even finished. A month later, code that was well tested and reviewed by all stakeholders was programmed into the new console and we had fully verified functionality running in about 15 minutes. The JUnit framework was used with Eclipse to do unit testing. Unit tests were also cross-referenced against a requirement set using a plug-in called JFeature. (Entropy7 (talk) 22:06, 17 August 2009 (UTC))
Seriously? Do you unit test your variable declarations as well?
- Jacob, San Diego —Preceding unsigned comment added by 76.88.0.180 (talk) 02:38, 16 April 2009 (UTC)
That sounds like a contradiction. A test checks an implementation, but an abstract method by definition does not yet have an implementation. If you do in fact have concrete methods in an abstract class, then to test that class you will need to create a concrete instance of it. To do so, you stub the abstract methods to return known values. — Preceding unsigned comment added by Dvanatta (talk • contribs) 05:15, 9 December 2011 (UTC)
"Extreme Programming and most other methods use unit tests to perform white box testing."
Maybe I'm just splitting hairs here. But XP and TDD use unit tests more as black box. From an outside point of view, you figure how you want to use the code. Then, you write the code that makes it happen. White box implies looking at the code and determining test cases to test it. Maybe this is a gray area (pun partially intended.) DRogers 18:39, 14 September 2006 (UTC)
It has to be whitebox. If it's blackbox then you basically have to test every possible combination of inputs. For most modules this is not feasible unless you are prepared to wait until the end of the universe. 07:00, 22 February 2012 (UTC) — Preceding unsigned comment added by 203.41.222.1 (talk)
I'm somewhat disheartened to see my changes undone. Not because they were mine, but because I tried to emphasise the limits that are way too often ignored. The one comment I wish could be reinstated is the one that says that unit tests are only as good as the *API* under test, and that designing a good API is hard, thus creating good unit tests is even harder. But who am I, right? Just a software practitioner with only 30 years of experience: I surely cannot match the wit and youthful ardor of a self-proclaimed editor ... Oh my ... — Preceding unsigned comment added by Verec (talk • contribs) 14:32, 28 March 2012 (UTC)
Can Go_(programming_language) be added to the list of "Language-level unit testing support"? It has a package in its standard library for it, just as Python does. — Preceding unsigned comment added by 174.1.81.63 (talk) 04:00, 25 March 2013 (UTC)
Section "Unit Testing Frameworks": "Unit testing without a framework is valuable in that there is a barrier to entry .. scant unit tests is hardly better than having none at all ...",
This 2nd-paragraph sentence seems to contradict itself. Is the author advocating FOR or AGAINST frameworks? And is this fact or opinion? The citation [10] "Bullseye Testing Technology" does not support this: that source does not discuss "framework", "barrier to entry", "valuable", "easy", or "scant" coverage. Clarification please? 69.248.214.71 (talk) 14:16, 22 May 2013 (UTC)
Should 'regression' point to the page on 'regression testing'? — Preceding unsigned comment added by Cbyneorne (talk • contribs) 15:17, 20 July 2006 (UTC)
At this moment the section title and paragraph contents don't correlate with the list of programming languages well.
Among the listed programming languages only D (http://dlang.org/unittest.html) and Cobra (http://cobra-language.com/docs/quality/) do have built-in language level support for unit testing. Other languages do have it through non-specific language constructs (like python through docstring does) or via annotations/other extensions (C# and Java).
So I think that either title and text should be reworded significantly, or the list of language should be reduced to 2: D and Cobra.
Correct me if I'm wrong. — Preceding unsigned comment added by Zerkms (talk • contribs) 01:20, 13 November 2013 (UTC)
The definitions given in the summary of this article include testing within a single unit and between multiple units, which conflicts with the definitions given by others:
Revision or verifiable supporting citations are needed. Stephen Charles Thompson (talk) 21:13, 17 October 2018 (UTC)
There is a 'Benefits' section; for balance, the 'Costs' should also be noted. Such costs include:
Given there is a cost to implementing unit tests, there is a risk of a negative return on investment if the test suite doesn't actually identify software defects in less time than it took to write and maintain the unit tests. If the development team are sufficiently adept, it is very rare that defects will exist in a 'unit' in isolation. It is more likely that defects will be introduced at integration or system level where module testing would be of greater value.
Not all software developers agree that unit testing is worthwhile. — Preceding unsigned comment added by Johnpwilson (talk • contribs) 15:50, 11 November 2013 (UTC)