Keeping Tests Valuable: Why Is Refactoring Resistance So Important for Tests?
The deeper we delve into testing, its principles, techniques, and best practices, the more the test suite will benefit the software and the development team. Creating a reliable and valuable test suite is essential. It's a difficult task, of course, but not an impossible one! So let's take a closer look at a very important concept: refactoring resistance in unit tests. I hope this article helps you understand this essential principle.
If you like the content, please share and like the post! This helps and encourages me to continue bringing content in text form too!😄
What is refactoring resistance? 🧐
I think it's worth exploring the word itself as much as possible. Let's turn to the dictionary for the meaning and origin of the word resist:
Action or effect of resisting, of not giving in or succumbing.
The above definition is interesting. After all, how can a test not give way in a given situation? We will understand soon. Now see the origin of the word resistance:
The word “resist” originates from the Latin “resistentia”, which in turn derives from the verb “resistere”. The verb “resistere” is made up of two parts: the prefix “re-”, meaning “against” or “again”, and the verb “sistere”, meaning “to stand”, “to stop”, or “to sustain”. Therefore, the word “resistance” implies the ability to oppose or withstand something, be it a force, a stress or a change. - Source: Etymonline.
We have a lot to understand here. I would like to highlight this part of the text above:
Therefore, the word “resistance” implies the ability to oppose or withstand something, be it a force, a stress or a change.
Keep the words “change” and “withstand” in mind. How do they relate to the refactoring resistance our tests should have, especially unit tests?
In software engineering, specifically in the context of software testing, the term “resistance” describes the ability of a test to withstand changes to the code without failing, as long as the observable behavior of the software is not changed. A test is expected to facilitate refactoring while protecting existing functionality. This is an important principle that greatly benefits the test suite! After all, we want tests that fail only when behavior breaks, because the observable behavior of the software is what matters to the customer and to domain experts.
In the books Unit Testing Principles, Practices, and Patterns - by Vladimir Khorikov and Effective Software Testing: A Developer's Guide - by Mauricio Aniche, the authors explain the importance of applying this principle (concept) to testing. Let’s understand the topic better below!
Why is this concept so important? 🚨
To better understand the role of refactoring-resistant tests within a robust and reliable test suite, we can make an analogy.
Think of an electrician who needs to check the functioning and integrity of a home's electrical installations. Typically, the testing tool most frequently used by electricians is a multimeter, which measures voltage, current, and electrical resistance to perform tests and diagnose problems. The multimeter is designed to provide accurate and reliable results regardless of small variations in the environment, such as temperature and humidity, or changes in the configuration of the electrical circuit that do not affect its overall operation.
But suppose the electrician's multimeter gives inaccurate readings that constantly require adjustments or rounding, or demands frequent attention, manual maintenance, and recalibration. The tool that should be helping now conveys insecurity and distrust, and wastes time. No electrician wants that!
Refactoring resistance in unit testing is similar to multimeter reliability. If every refactoring forces changes to the unit tests, something is not right in the test suite. Fragile, refactoring-sensitive tests can produce a large number of false positives, making it harder for developers to identify and fix real problems in the code.
Just as a multimeter must provide accurate and reliable results under different conditions, refactoring-resistant unit tests must be able to withstand code changes as long as the observable behavior of the software does not change. This allows developers to modify and improve code with confidence, just as an electrician can rely on a multimeter to diagnose problems and check the integrity of electrical installations.
In addition to reliability and confidence, there is the time factor. Tests that break under refactoring can consume a lot of time, depending on their complexity and on the team's experience in dealing with these failures. Remember that testing is a tool meant to save us time, not to consume more and more of it. Let's look at the main losses that fragile tests bring to the software:
Difficulty maintaining code: When tests are fragile and break easily after refactoring, developers may hesitate to make changes to the code. This can lead to the accumulation of technical debt and make the code difficult to maintain in the long term.
Developers lose confidence in tests: If tests fail frequently due to a lack of refactoring resistance, we begin to distrust the quality of the tests and, by extension, the software itself. This can lead to less confidence in the effectiveness of the tests and the overall quality of the code.
Increased development time: Frequently breaking tests require developers to spend more time fixing and tuning tests rather than focusing on writing new code or improving existing functionality. This can result in delays and increased development costs.
Discouragement of refactoring: One detriment I have seen is that it discourages developers from making code improvements because they may be afraid of breaking existing tests. This can lead to a reluctance to resolve design issues and a decrease in overall software quality.
Difficulty detecting real problems: When tests are not resistant to refactoring, it can be difficult to distinguish real flaws from fake ones. This can cause developers to ignore real problems or spend time investigating problems that don't affect the software's behavior.
Let's look at a more practical example. Imagine an inventory management system:
public class InventoryManager
{
    private List&lt;Product&gt; _products;

    public InventoryManager()
    {
        _products = new List&lt;Product&gt;();
    }

    public void AddProduct(Product product)
    {
        if (product == null)
        {
            // Note: the single-string ArgumentNullException constructor treats the
            // string as the parameter name, so pass the message explicitly.
            throw new ArgumentNullException(nameof(product), "Product cannot be null.");
        }
        _products.Add(product);
    }

    public List&lt;Product&gt; GetProducts()
    {
        return _products;
    }
}
Now see the unit test:
public class InventoryManagerTests
{
    [Fact]
    public void AddProduct_ThrowsArgumentNullException_WhenProductIsNull()
    {
        // Arrange
        var inventoryManager = new InventoryManager();

        // Act
        var exception = Assert.Throws&lt;ArgumentNullException&gt;(() => inventoryManager.AddProduct(null));

        // Assert: check that the exception message is correct
        // (ArgumentNullException appends the parameter name to the message)
        Assert.StartsWith("Product cannot be null.", exception.Message);
    }
}
This example is classic and serves teaching purposes very well. The unit test checks the exception message to ensure it matches "Product cannot be null.".
I've written many tests like this, and the biggest problem I always faced was the famous message change that would break the test. Although it may seem like a good idea at first glance, it makes the test fragile to refactoring.
If one day the message changes, the test will fail, even though the core functionality of throwing an exception still works. We want to avoid this!
So it's important to analyze whether this really matters for the test. In this case, the exact message doesn't matter; we just want the behavior to occur. The refactored test could look like this:
[Fact]
public void AddProduct_ThrowsArgumentNullException_WhenProductIsNull()
{
    // Arrange
    var inventoryManager = new InventoryManager();

    // Act & Assert
    Assert.Throws&lt;ArgumentNullException&gt;(() => inventoryManager.AddProduct(null));
}

// Method
public void AddProduct(Product product)
{
    if (product == null)
    {
        throw new ArgumentNullException(nameof(product), "Product cannot be null!!!!"); // Any change in the message does not affect the test!
    }
    _products.Add(product);
}
Any change to the message does not affect the test! Maybe that still isn't enough to convince you, so I'll provide another, slightly different example.
Imagine you are on an e-commerce team and need to implement a business rule that grants a 10% discount on orders over $100. You write a unit test to ensure the discount is applied. The test checks the discount amount by directly calling a method that calculates the discount based on the order total. This implementation is fragile: if the discount logic is refactored to consider other factors, such as product categories or seasonal promotions, the test will fail even if the "10% discount on orders over $100" rule is still honored.
Let's look at an example of a test that seems common to write but is not resistant to refactoring:
public class OrderTests
{
    [Fact]
    public void OrderHasCorrectDiscount_WhenExceedingThreshold()
    {
        // Arrange
        var order = new Order();
        order.AddItem(new OrderItem { Price = 110.00 });
        var expectedDiscount = 110.00 * 0.10; // 10% off

        // Act
        order.ApplyDiscounts(); // Assuming this method applies all relevant discounts

        // Assert
        Assert.Equal(expectedDiscount, order.Discount);
    }
}
The OrderHasCorrectDiscount_WhenExceedingThreshold test presents a significant problem in terms of maintainability and flexibility. The test assumes the discount is always 10% for orders over $100, which means it is tightly coupled to this specific business rule.
Coupling with Business Logic: The test is directly linked to the current business rule that all orders over $100 receive a 10% discount. If the discount policy changes, the test will fail, even if the new policy is correctly implemented.
Lack of Flexibility: The test is not flexible enough to accommodate changes in business rules, such as variable discounts based on product categories, seasonal promotions, or progressive discounts.
Logic Duplication: The test duplicates the discount calculation logic that is being tested. Ideally, the test should not replicate logic that is present in production code, as this increases the risk of parallel errors and makes the test less useful as an independent verification tool.
Difficult to Maintain: Any change to the way discounts are applied will require the test to be updated. This leads to costly maintenance and increases the risk of tests becoming obsolete or incorrect after changes to production code.
To resolve these issues, the test can be rewritten to verify that a discount was applied when the correct conditions are met, without locking in a specific discount amount.
public class OrderTests
{
    [Fact]
    public void DiscountIsApplied_WhenOrderExceedsThreshold()
    {
        // Arrange
        var order = new Order();
        order.AddItem(new OrderItem { Price = 110.00 }); // Price above the discount threshold

        // Act
        order.ApplyDiscounts(); // Assuming this method applies all relevant discounts

        // Assert
        // Verify that the applied discount is non-zero.
        Assert.NotEqual(0, order.Discount);
    }
}
What if, instead of the order.Discount amount, the order exposes a boolean DiscountApplied flag? Even better!
public class OrderTests
{
    [Fact]
    public void DiscountIsApplied_WhenOrderExceedsThreshold()
    {
        // Arrange
        var order = new Order();
        order.AddItem(new OrderItem { Price = 110.00 }); // Price above the discount threshold

        // Act
        order.ApplyDiscounts(); // Assuming this method applies all relevant discounts

        // Assert
        // Verify that the boolean property 'DiscountApplied' is true.
        Assert.True(order.DiscountApplied, "Discount should be applied for orders exceeding the threshold.");
    }
}
ATTENTION! ⚠️
Remember that the example is for educational purposes only! Unit test structure and design depend heavily on the design of the underlying code. Unit tests very much reflect the design of your code and your choices; if something doesn't smell right, the code design probably needs to be reconsidered!
Additionally, testing whether a discount has been applied without checking the exact amount is useful to ensure that the high-level logic works as expected. However, this does not replace more specific unit tests that verify the discount calculations. The example I provided is a small part of a system; naturally, discounts can vary greatly by region, country and business context. It is up to programmers to evaluate the ideal strategies and to decide which scenarios and behaviors matter for unit testing.
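When the exact amounts are themselves the behavior you care about, they can get their own narrowly scoped test instead of being folded into the high-level one. Here is a sketch, assuming the same hypothetical Order and OrderItem API from the example above:

```csharp
using Xunit;

public class DiscountAmountTests
{
    // Pins the concrete policy ("10% over $100") in one dedicated place,
    // so the behavioral test above can stay amount-agnostic.
    [Theory]
    [InlineData(110.00, 11.00)]
    [InlineData(250.00, 25.00)]
    public void Discount_IsTenPercent_ForOrdersOverThreshold(double orderValue, double expectedDiscount)
    {
        // Arrange
        var order = new Order();
        order.AddItem(new OrderItem { Price = orderValue });

        // Act
        order.ApplyDiscounts();

        // Assert (third argument is decimal-digit precision, absorbing floating-point noise)
        Assert.Equal(expectedDiscount, order.Discount, 2);
    }
}
```

If the policy changes, only this test needs updating; the behavioral test that merely checks "a discount was applied" keeps passing.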
In many cases, end-to-end testing carried out by an automation framework in conjunction with the QA team may be enough to evaluate the scenario! To complement, see the test scenario below, changing the example a little.
Hypothetical business rules for voucher validation:
A voucher must have a unique code.
A voucher must be within its validity period.
A voucher may have a minimum purchase value.
A voucher may be limited to a specific number of uses.
public class VoucherTests
{
    private VoucherService _service;
    private IRepository&lt;Voucher&gt; _voucherRepository; // Repository abstraction to access voucher data.

    public VoucherTests()
    {
        // A simple in-memory fake keeps the test self-contained;
        // a mocking library (e.g. Moq) would work just as well.
        _voucherRepository = new InMemoryVoucherRepository();
        _service = new VoucherService(_voucherRepository);
    }

    [Fact]
    public void VoucherService_Validate_ReturnsTrueForValidVoucher()
    {
        // Arrange
        var validVoucher = new Voucher
        {
            Code = "VALID2023",
            ExpiryDate = DateTime.UtcNow.AddDays(10), // Expiry date in the future
            MinimumSpend = 50,
            RemainingUses = 5
        };
        _voucherRepository.Add(validVoucher); // Add the voucher to the fake repository

        // Act
        var validationResult = _service.ValidateVoucher(validVoucher.Code, 100); // Purchase value of 100

        // Assert
        Assert.True(validationResult.IsValid, "Valid voucher must return valid.");
    }

    [Fact]
    public void VoucherService_Validate_ReturnsFalseForExpiredVoucher()
    {
        // Arrange
        var expiredVoucher = new Voucher
        {
            Code = "EXPIRED2023",
            ExpiryDate = DateTime.UtcNow.AddDays(-1), // Expiry date in the past
            MinimumSpend = 0,
            RemainingUses = 1
        };
        _voucherRepository.Add(expiredVoucher);

        // Act
        var validationResult = _service.ValidateVoucher(expiredVoucher.Code, 100);

        // Assert
        Assert.False(validationResult.IsValid, "Expired voucher should not return valid.");
    }

    // Additional tests for other rules such as minimum purchase value and usage limit...
}

// Minimal in-memory test double for the repository.
public class InMemoryVoucherRepository : IRepository&lt;Voucher&gt;
{
    private readonly Dictionary&lt;string, Voucher&gt; _items = new Dictionary&lt;string, Voucher&gt;();
    public void Add(Voucher entity) => _items[entity.Code] = entity;
    public void Update(Voucher entity) => _items[entity.Code] = entity;
    public Voucher FindByCode(string code) => _items.TryGetValue(code, out var voucher) ? voucher : null;
}
public class VoucherService
{
    private IRepository&lt;Voucher&gt; _repository;

    public VoucherService(IRepository&lt;Voucher&gt; repository)
    {
        _repository = repository;
    }

    public ValidationResult ValidateVoucher(string code, decimal purchaseAmount)
    {
        var voucher = _repository.FindByCode(code);
        if (voucher == null || voucher.ExpiryDate &lt; DateTime.UtcNow ||
            voucher.MinimumSpend &gt; purchaseAmount || voucher.RemainingUses &lt;= 0)
        {
            return new ValidationResult { IsValid = false };
        }

        voucher.RemainingUses--;
        _repository.Update(voucher);
        return new ValidationResult { IsValid = true };
    }
}

// Represents the validation result
public class ValidationResult
{
    public bool IsValid { get; set; }
}

// Interface for a generic repository.
public interface IRepository&lt;T&gt;
{
    void Add(T entity);
    void Update(T entity);
    T FindByCode(string code);
}
The example above illustrates a well-structured unit test that focuses on application behavior rather than specific implementation details. Let's quickly go over the main positives:
Use of Abstractions: The IRepository&lt;Voucher&gt; interface decouples the service under test from any specific implementation of the data repository. The service can be tested regardless of how vouchers are stored or retrieved, allowing the repository backend to change without breaking the test.
Focus on the Final Result: The test verifies the final result of the validation operation (IsValid), rather than verifying how that result was achieved. This is an example of the "black box testing" principle, where the test is not concerned with the internal details of the method being tested.
Descriptive and Behavioral Tests: The test method names describe the expected behavior of the system: ReturnsTrueForValidVoucher and ReturnsFalseForExpiredVoucher. This makes it easier to understand the purpose of each test and what it checks.
Resistance to Implementation Changes: Because the test only checks the validity of the return value (IsValid), internal changes to the ValidateVoucher method that do not affect observable behavior will not require changes to the tests. For example, if the method added new validation rules or changed the order of existing checks, the test would still pass as long as the external behavior remained consistent.
When you can refactor without fear, you know your tests aren't just passing; they truly understand the behavior they protect.
Now, remember I mentioned earlier that weak tests result in false positives? Well, let's talk about that now!
Beware of false positives! 🔍
It is important to first understand what a false positive is. Here is the definition:
A false positive is a false alarm. It occurs when a test fails even though the tested functionality is working correctly, that is, the test indicates a problem that does not exist. This can lead to a waste of time and resources as developers have to investigate and fix these non-existent “issues”.
How do false positives occur? They typically appear after a refactoring; note that for a failure to count as a false alarm, the behavior of the feature must remain the same! So, can we say there is a relationship between false positives and refactoring resistance? Let's understand this with another analogy:
Imagine you are using a rapid COVID-19 test to see if you are infected with the virus. These tests are designed to be quick and convenient, but assume that the test you are using is very sensitive to variations in temperature and humidity, which is not uncommon for chemical reactions. On a particularly wet or cold day, the test indicates a positive result for COVID-19, even if you are not actually infected. This would be a false positive, caused not by the presence of the virus, but by environmental conditions.
You follow the guidelines and self-isolate, perhaps even canceling important appointments or taking medication unnecessarily. However, when you repeat the test under controlled conditions or with a different test that is not affected by your environment, you discover that you are, in fact, healthy and do not have the virus.
Refactoring resistance in unit tests is similar to the reliability of these COVID-19 tests. If a unit test is highly sensitive to changes in the code that do not affect functionality (such as refactorings), it may fail and indicate that there is a problem when in fact the behavior of the code remains correct. This can lead to wasted time and resources, as well as unnecessary isolation in the case of COVID-19 testing.
Just as we want COVID-19 tests to be reliable and not influenced by irrelevant external factors, we want our unit tests to be resistant to refactorings and code changes that do not change the intended behavior. This ensures that tests only fail when there is a true problem with the functionality, and not due to benign changes to the code structure.
For example, a test that directly accesses the private fields of a class may fail if the internal structure of the class is changed, even if the functionality itself is not affected. So is testing private methods considered bad practice? See the next topic.
Does testing private methods cause false positives?
Yes! Why? Testing private methods can lead to false positives because they are considered an internal, non-visible part of the code that should not be accessed directly.
When we test private methods (or methods that should be private), we violate abstraction and data encapsulation, and future changes to the implementation of those methods can break our tests.
Additionally, testing private methods makes your tests dependent on the internal implementation of those methods, which means that if the implementation changes, the tests will fail. Let's look at an example:
// All methods are exposed, this is not good!
public class NameValidator
{
    public List&lt;string&gt; InvalidNames { get; set; }

    public NameValidator()
    {
        InvalidNames = new List&lt;string&gt; { "test", "admin", "user" };
    }

    public bool IsValidName(string name)
    {
        return IsNameLengthValid(name) &amp;&amp; NameDoesNotContainNumbers(name) &amp;&amp; NameIsNotInInvalidList(name);
    }

    public bool IsNameLengthValid(string name)
    {
        return name.Length &gt;= 2 &amp;&amp; name.Length &lt;= 100;
    }

    public bool NameDoesNotContainNumbers(string name)
    {
        foreach (char c in name)
        {
            if (Char.IsDigit(c)) return false;
        }
        return true;
    }

    public bool NameIsNotInInvalidList(string name)
    {
        return !InvalidNames.Contains(name.ToLower());
    }
}
The NameValidator class's helper methods should be private because they are implementation details. This affects your unit tests because you may end up testing these methods directly instead of testing the validator's overall behavior. Ideally, you should properly encapsulate implementation details and test only the public functionality of the class. It is therefore essential to know the right time to use private methods. See the refactored class:
using System;
using System.Collections.Generic;

public class NameValidator
{
    private List&lt;string&gt; InvalidNames { get; set; }

    public NameValidator()
    {
        InvalidNames = new List&lt;string&gt; { "test", "admin", "user" };
    }

    public bool IsValidName(string name)
    {
        return IsNameLengthValid(name) &amp;&amp; NameDoesNotContainNumbers(name) &amp;&amp; NameIsNotInInvalidList(name);
    }

    private bool IsNameLengthValid(string name)
    {
        return name.Length &gt;= 2 &amp;&amp; name.Length &lt;= 100;
    }

    private bool NameDoesNotContainNumbers(string name)
    {
        foreach (char c in name)
        {
            if (Char.IsDigit(c)) return false;
        }
        return true;
    }

    private bool NameIsNotInInvalidList(string name)
    {
        return !InvalidNames.Contains(name.ToLower());
    }
}
Cool, but how does hiding a class's implementation details avoid false positives in tests? When we refactor the class and make the validation methods private, we focus our unit tests on verifying the expected behavior of the class as a whole, rather than testing the internal implementation.
By making the methods private and focusing on unit testing the public method, you minimize the chance of false positives. This is because you will be testing the combination of all validation rules together, not a specific function. With this approach, the test will only pass if all validation rules are working correctly together, ensuring that the overall logic of the class is working as expected.
If the developer keeps everything in the class directly accessible, i.e. public, we may fall into false positives, because each method can be tested individually. This can lead to a false positive where each internal method's test passes, but the class's validation as a whole does not work as it should!
If you don't understand yet, don't worry. Let's go! For example, a unit test for IsNameLengthValid might check that the name is between 2 and 100 characters and pass. A test for NameDoesNotContainNumbers can check that the name does not contain numbers and also pass. And a test for NameIsNotInInvalidList can check that the name is not in the list of invalid names and pass as well.
However, these tests do not guarantee that the IsValidName method works correctly as a whole. For example, if there is additional logic in IsValidName that is not covered by the tests for the private methods, or if the way these methods are combined in IsValidName is changed, this can lead to unexpected results that the individual unit tests would not capture.
To demonstrate the risk and danger of testing only private methods individually, consider the following hypothetical situation:
Suppose the programmer decides to add a new validation rule to the IsValidName method that checks that the name does not begin with a special character. The developer updates the IsValidName method but forgets to create a unit test for this new rule. Existing tests for the private methods IsNameLengthValid, NameDoesNotContainNumbers, and NameIsNotInInvalidList will still pass, because they are not affected by the change:
public bool IsValidName(string name)
{
    return IsNameLengthValid(name) &amp;&amp;
           NameDoesNotContainNumbers(name) &amp;&amp;
           NameIsNotInInvalidList(name) &amp;&amp;
           NameDoesNotStartWithSpecialCharacter(name); // New rule added
}

private bool NameDoesNotStartWithSpecialCharacter(string name)
{
    // Suppose this is the new validation rule
    return !char.IsPunctuation(name[0]);
}
Now let's see how a unit test for the IsValidName method could be written to cover all validation rules:
using Xunit;

public class NameValidatorTests
{
    [Theory]
    [InlineData("John", true)] // Valid name
    [InlineData("Jo", true)] // Valid name with minimum length
    [InlineData("J", false)] // Name too short
    [InlineData("John123", false)] // Name contains numbers
    [InlineData("admin", false)] // Name is on the invalid list
    [InlineData("!John", false)] // Name starts with a special character
    public void IsValidName_ReturnsCorrectResult(string name, bool expected)
    {
        // Arrange
        var validator = new NameValidator();

        // Act
        var result = validator.IsValidName(name);

        // Assert
        Assert.Equal(expected, result);
    }
}
If the writer of these tests had tested only the private methods individually, the test for the new NameDoesNotStartWithSpecialCharacter rule might have been overlooked. However, when testing the public IsValidName method, which is the entry point for the name validation functionality, any changes or additions to validation rules will be captured.
If the test fails for the "!John" case, this indicates that something is wrong with the implementation of the new validation rule, or that there is a regression in the expected behavior of the IsValidName method. This highlights the importance of testing the expected behavior of the class through its public methods, ensuring that all validation rules work together as expected, and avoiding the false positives that can arise when testing only private methods individually.
Robust tests don't scream at every shift of the wind; they whisper the right direction during refactoring.
This way we guarantee that the NameValidator class works correctly without worrying about internal implementation details. What else can generate false positives? Let's quickly talk about unnecessary assertions.
Unnecessary Asserts! 😅
Can having too many assertions make tests fragile under refactoring? Depending on the test and the behavior you are trying to cover, yes! Especially if the assertions expose implementation details or details that are not relevant to the test. Here are some reasons:
Over-coupling: When a unit test has too many assertions, it tends to be over-coupled to the implementation details of the code. This means that any small change to the implementation, even if it doesn't affect the desired behavior, can cause the test to fail.
Readability and maintainability become difficult: Tests with many assertions can be more difficult to read and maintain, especially if they are not well structured. This can lead to fragile and error-prone tests, increasing the chance of false failures (false positives) after refactorings that do not affect the observable behavior of the software.
Development time increases: If it is difficult to identify where the problem is, the time to fix it increases, which creates stress and mental exhaustion for developers.
Let's look at a brief example, consider the test below:
public class OrderTests
{
    [Fact]
    public void CreateOrder_ValidInput_CreatesOrderWithCorrectPropertiesAndTotal()
    {
        // Arrange
        var customerId = 1;
        var orderItems = new List&lt;OrderItem&gt;
        {
            new OrderItem { ProductId = 101, Quantity = 2, Price = 10 },
            new OrderItem { ProductId = 102, Quantity = 1, Price = 5 }
        };

        // Act
        var order = new Order(customerId, orderItems);

        // Assert
        Assert.Equal(customerId, order.CustomerId);
        Assert.Equal(2, order.OrderItems.Count);
        Assert.Equal(101, order.OrderItems[0].ProductId);
        Assert.Equal(2, order.OrderItems[0].Quantity);
        Assert.Equal(10, order.OrderItems[0].Price);
        Assert.Equal(102, order.OrderItems[1].ProductId);
        Assert.Equal(1, order.OrderItems[1].Quantity);
        Assert.Equal(5, order.OrderItems[1].Price);
        Assert.Equal(25, order.Total);
    }
}
The test checks whether CustomerId and Total are correct, but it also checks every property of each order item. These additional assertions are unnecessary: they make the test more sensitive to refactoring and harder to read and maintain. If you write tests this way, consider refocusing them on a few assertions. The less incidental detail we put into our tests, the better! This brings many benefits to the project's test suite:
Makes debugging easier: When a test fails, it is easier to identify the cause if there is only one reason for the failure. With fewer assertions, you can quickly identify the specific behavior being tested and find the cause of the failure.
Improves readability: Tests with fewer assertions are generally simpler and easier to read. This allows other developers to quickly understand the purpose of the test and the expected behavior of the code.
Focused testing: Avoiding too many assertions helps keep testing focused on a single behavior or unit of functionality. This makes tests more robust and less likely to fail due to unrelated code changes.
Maintenance becomes easier: Tests with fewer assertions are generally easier to maintain because they are less tied to code implementation details. This means that when you need to make changes to your code, you are less likely to need to make corresponding changes to your tests.
Brings more confidence in the test suite: A well-structured test suite with focused tests and fewer assertions increases confidence in the effectiveness of the tests. This can lead to faster and more accurate detection of code issues, improving overall software quality.
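To make the contrast concrete, here is how the test above could be trimmed. This is a sketch assuming the same hypothetical Order and OrderItem types: the setup stays, but only the behavior under test (the computed total) is asserted.

```csharp
using System.Collections.Generic;
using Xunit;

public class OrderTotalTests
{
    [Fact]
    public void CreateOrder_ValidInput_CalculatesCorrectTotal()
    {
        // Arrange
        var orderItems = new List<OrderItem>
        {
            new OrderItem { ProductId = 101, Quantity = 2, Price = 10 },
            new OrderItem { ProductId = 102, Quantity = 1, Price = 5 }
        };

        // Act
        var order = new Order(1, orderItems);

        // Assert: only the outcome this test is about (2 * 10 + 1 * 5 = 25).
        // Per-item property checks would belong in a separate, narrowly scoped test.
        Assert.Equal(25, order.Total);
    }
}
```

If the per-item properties really matter, give them their own test with its own descriptive name, so a failure points directly at the behavior that broke.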
Try putting these tips into practice the next time you're writing tests, aim for simplicity first. Tests should be useful tools that speed up our work!
Evaluating User Message Exposure in Unit Test Asserts
Text messages that are presented to the end user in an application play an important role in the user experience. They can be instructions, error messages, confirmations, among others. In unit testing, checking these messages can be crucial to ensure that software communication is correct and aligned with end-user expectations and requirements.
When to Include Messages in Asserts
1. Message Consistency is Critical: When message accuracy is vital, such as in legal text, terms of service, or safety instructions, assertions should verify that the message is exactly as it should be.
2. Message-Dependent Business Flow: If the message is part of the business flow and any change could confuse the user or change the interpretation of a functionality, it is important to include verification of this in the test.
3. Message Stability: If messages rarely change or are controlled by a team that understands the importance of stability in the user interface, asserts can be used to ensure that they remain unchanged.
When to Avoid Messages in Asserts
1. Messages Subject to Frequent Change: If the text is frequently changed to improve UX/UI or for marketing reasons, avoid direct assertions, as this can lead to heavy test maintenance.
2. Internationalization and Localization: For applications that are translated into multiple languages, testing exact messages can be impractical and inefficient. In this case, it is preferable to test error codes or message identifiers.
3. Message Personalization: If messages are personalized for the user or configurable by customers, it is better to test the personalization logic rather than the exact content.
Assertion Strategies
1. Use Codes or Identifiers: Instead of full textual messages, test error codes or message identifiers that are less prone to change and are easier to verify.
2. Abstract Messages: Use message abstraction resources, such as resource files or constants, which can be referenced in tests, minimizing the impact of text changes.
3. Test Structure, Not Content: Confirm the structure of the message (for example, whether it contains a link or button) rather than the exact content, which allows flexibility in writing without sacrificing the integrity of the test.
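As a sketch of strategy 1, suppose the voucher validation result from earlier carried a stable, machine-readable code alongside IsValid. The ErrorCode property and the VoucherErrors constants below are hypothetical additions for illustration, not part of the article's VoucherService:

```csharp
using Xunit;

// Hypothetical stable identifiers; the user-facing text can be reworded
// or localized freely without touching these.
public static class VoucherErrors
{
    public const string Expired = "VOUCHER_EXPIRED";
    public const string BelowMinimumSpend = "BELOW_MINIMUM_SPEND";
}

// Hypothetical extension of the article's ValidationResult.
public class ValidationResultWithCode
{
    public bool IsValid { get; set; }
    public string ErrorCode { get; set; }
}

public class VoucherErrorCodeTests
{
    [Fact]
    public void Validation_ReportsExpiredCode_ForExpiredVoucher()
    {
        // In a real test this result would come from the service under test.
        var result = new ValidationResultWithCode
        {
            IsValid = false,
            ErrorCode = VoucherErrors.Expired
        };

        // Assert on the identifier, not on the message shown to the user.
        Assert.False(result.IsValid);
        Assert.Equal(VoucherErrors.Expired, result.ErrorCode);
    }
}
```

The same idea serves internationalized applications well: the test pins the resource key or error code, while translators and UX writers own the text.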
Conclusion
In conclusion, the primary objective of unit testing is to assure that as the system evolves, the functionality remains consistent and reliable. Tests that are resistant to refactoring play a critical role in achieving this objective. They provide a safety net that allows developers to improve the design, structure, and performance of the code without fear of inadvertently introducing regressions or bugs.
Refactoring resistance is essential for maintaining the long-term value of tests. When tests are tightly coupled to the implementation details, any change in the code, no matter how trivial, can cause the tests to fail, leading to a false alarm that wastes developer time and erodes trust in the test suite. Instead, by designing tests that focus on the behavior and outcomes - what the code should do, rather than how it does it - we ensure that they remain relevant and informative, regardless of the internal changes to the codebase.
Moreover, refactoring-resistant tests facilitate a more agile and adaptable development process. They empower developers to refactor code with confidence, knowing that the tests will continue to guard against regressions. This is particularly important in modern development practices where refactoring is not just a one-time task but an ongoing process of incremental improvements.
Ultimately, the true value of a test is not determined by its ability to pass but by its capability to signal when the intended behavior of the system has changed. Tests that are resistant to refactoring maintain their value over time, helping teams to deliver software that is not only functional today but also remains robust and easy to maintain in the future. They ensure that tests are not just ticking boxes but are actively contributing to the quality and integrity of the software development lifecycle.
If you enjoy the content, please share and leave a like on the post!😄