Nuke helps us achieve a flexible, maintainable, automated build process. However, nothing stops us from automating any task we perform during our day-to-day work, especially if it is manual and repetitive. Thus, we would like to share other tasks we typically automate. All the code shown in this post is available here.
Setting up your local development environment
Most applications need external dependencies to function properly. Dependencies like databases, message brokers, etc., are usually shared in the development environment. While this practice is not wrong, it may lead to unexpected outcomes when two or more developers use the same resources in parallel. So, why not run all your dependencies locally using tools like Docker? We know there aren't Docker images for every dependency we might use, such as Azure Service Bus or Splunk, but with good abstractions in the middle, like Rebus or Serilog, it is easy to switch between them depending on the environment we're in (a sketch of such a switch appears at the end of this section). The following example shows how to run SQL Server, RabbitMQ, and Seq:
string DockerPrefix = "WebAPI";

string SQLPort = "1033";
string SQLPassword = "Sqlserver123$";

Target RunOrStartSQLServer => _ => _
    .Description($"Run SQLServer on port {SQLPort}")
    .Executes(() =>
    {
        try
        {
            // Try to create and run the container; if it already exists, docker run fails,
            // so we fall back to starting the existing container instead.
            DockerRun(x => x
                .SetName($"{DockerPrefix}-sqlserver")
                .AddEnv("ACCEPT_EULA=Y", $"MSSQL_SA_PASSWORD={SQLPassword}")
                .SetImage("mcr.microsoft.com/mssql/server:2019-CU14-ubuntu-20.04")
                .EnableDetach()
                .SetPublish($"{SQLPort}:1433")
            );
        }
        catch (Exception)
        {
            DockerStart(x => x
                .AddContainers($"{DockerPrefix}-sqlserver")
            );
        }
    });
string SEQPort = "5342";

Target RunOrStartSeq => _ => _
    .Description($"Run Seq on port {SEQPort}")
    .Executes(() =>
    {
        try
        {
            DockerRun(x => x
                .SetName($"{DockerPrefix}-seq")
                .AddEnv("ACCEPT_EULA=Y")
                .SetRestart("unless-stopped")
                .SetImage("datalust/seq:latest")
                .EnableDetach()
                .SetPublish($"{SEQPort}:80")
            );
        }
        catch (Exception)
        {
            DockerStart(x => x
                .AddContainers($"{DockerPrefix}-seq")
            );
        }
    });
string RabbitMQUser = "admin";
string RabbitMQPassword = "Rabbitmq123$";
string RabbitMQAdminPort = "15671";
string RabbitMQPort = "5671";

Target RunOrStartRabbitMQ => _ => _
    .Description($"Run RabbitMQ on port {RabbitMQPort}")
    .Executes(() =>
    {
        try
        {
            DockerRun(x => x
                .SetName($"{DockerPrefix}-rabbitmq")
                .SetHostname($"{DockerPrefix}-host")
                .AddEnv($"RABBITMQ_DEFAULT_USER={RabbitMQUser}", $"RABBITMQ_DEFAULT_PASS={RabbitMQPassword}")
                .SetImage("rabbitmq:3-management")
                .EnableDetach()
                .AddPublish($"{RabbitMQAdminPort}:15672")
                .AddPublish($"{RabbitMQPort}:5672")
            );
        }
        catch (Exception)
        {
            DockerStart(x => x
                .AddContainers($"{DockerPrefix}-rabbitmq")
            );
        }
    });
Target StartEnv => _ => _
    .Description("Start the development environment")
    .DependsOn(RunOrStartSQLServer)
    .DependsOn(RunOrStartSeq)
    .DependsOn(RunOrStartRabbitMQ)
    .Executes(() =>
    {
        Serilog.Log.Information("Development env started");
    });
To set up our local environment, we only need to execute nuke StartEnv. Pretty impressive, right? In the production environment, the final dependencies are Azure SQL Server, Azure Service Bus, and Loggly. In the same way, we can have a command to remove all these dependencies from our environment.
Target StopSQLServer => _ => _
    .Executes(() =>
    {
        DockerStop(x => x
            .AddContainers($"{DockerPrefix}-sqlserver")
        );
    });

Target StopSeq => _ => _
    .Executes(() =>
    {
        DockerStop(x => x
            .AddContainers($"{DockerPrefix}-seq")
        );
    });

Target StopRabbitMQ => _ => _
    .Executes(() =>
    {
        DockerStop(x => x
            .AddContainers($"{DockerPrefix}-rabbitmq")
        );
    });

Target StopEnv => _ => _
    .Description("Stop the development environment")
    .DependsOn(StopSQLServer)
    .DependsOn(StopSeq)
    .DependsOn(StopRabbitMQ)
    .Executes(() =>
    {
        Serilog.Log.Information("Development env stopped");
    });
Target RemoveSQLServer => _ => _
    .DependsOn(StopSQLServer)
    .Executes(() =>
    {
        DockerRm(x => x
            .AddContainers($"{DockerPrefix}-sqlserver")
        );
    });

Target RemoveSeq => _ => _
    .DependsOn(StopSeq)
    .Executes(() =>
    {
        DockerRm(x => x
            .AddContainers($"{DockerPrefix}-seq")
        );
    });

Target RemoveRabbitMQ => _ => _
    .DependsOn(StopRabbitMQ)
    .Executes(() =>
    {
        DockerRm(x => x
            .AddContainers($"{DockerPrefix}-rabbitmq")
        );
    });

Target RemoveEnv => _ => _
    .DependsOn(RemoveSQLServer)
    .DependsOn(RemoveSeq)
    .DependsOn(RemoveRabbitMQ)
    .Description("Remove the development environment")
    .Executes(() =>
    {
        Serilog.Log.Information("Development env removed");
    });
Run nuke RemoveEnv to leave no traces of dependencies in our local environment.
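As mentioned above, what lets the local containers stand in for the production dependencies is the abstraction layer. As a rough illustration (not code from the post's repository), here is a minimal sketch of how a Rebus transport could be switched between the local RabbitMQ container and Azure Service Bus based on the environment; it assumes the Rebus, Rebus.RabbitMq, and Rebus.AzureServiceBus packages, and the connection strings, queue name, and environment variable names are made up for the example:

// Sketch only: transport selection based on the environment.
using System;
using Rebus.Activation;
using Rebus.Config;

var activator = new BuiltinHandlerActivator();

var bus = Configure.With(activator)
    .Transport(t =>
    {
        if (Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") == "Development")
        {
            // The RabbitMQ container started by nuke StartEnv
            t.UseRabbitMq("amqp://admin:Rabbitmq123$@localhost:5671", "webapi-input");
        }
        else
        {
            // Azure Service Bus in production
            t.UseAzureServiceBus(
                Environment.GetEnvironmentVariable("AsbConnectionString"),
                "webapi-input");
        }
    })
    .Start();

With the transport selection isolated like this, the rest of the application code does not need to change between the local and production environments.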
Keeping the database up to date
If we work with a database, eventually we will need to execute scripts on it. Those scripts could be run by a database administrator directly in the target environment, or by us locally. Another option is to use an application that runs them during deployment (or locally). In our case, we chose the latter: a separate application responsible for running all the database scripts each time it is executed. The following code shows how to automate running it as a command (a sketch of the migrator application itself appears at the end of this section):
[Parameter("Connection string to run the migration")]
public string ConnectionString;
AbsolutePath SourceDirectory => RootDirectory / "src";
AbsolutePath TestsDirectory => RootDirectory / "tests";
AbsolutePath PublishDirectory => RootDirectory / "publish";
AbsolutePath ArtifactDirectory => RootDirectory / "artifact";
Target CleanMigrator => _ => _
.Executes(() =>
{
EnsureCleanDirectory(PublishDirectory / MigratorProject);
});
Target CompileMigrator => _ => _
.DependsOn(CleanMigrator)
.Executes(() =>
{
DotNetBuild(s => s
.SetProjectFile(SourceDirectory / MigratorProject)
.SetConfiguration(Configuration));
});
Target PublishMigrator => _ => _
.DependsOn(CompileMigrator)
.Executes(() =>
{
DotNetPublish(s => s
.SetProject(SourceDirectory / MigratorProject)
.SetConfiguration(Configuration)
.EnableNoBuild()
.EnableNoRestore()
.SetOutput(PublishDirectory / MigratorProject));
});
Target RunMigrator => _ => _
.DependsOn(PublishMigrator)
.Description("Apply the scripts over the database")
.Executes(() =>
{
var defaultEnvironmentVariables = Environment.GetEnvironmentVariables()
.Cast<DictionaryEntry>().ToDictionary(entry=>entry.Key.ToString(), entry=>entry.Value.ToString());
var environmentVariables = new Dictionary<string, string>(defaultEnvironmentVariables);
if (!string.IsNullOrEmpty(ConnectionString))
{
environmentVariables.Add("DbConnectionString", ConnectionString);
}
DotNet(PublishDirectory / MigratorProject / $"{MigratorProject}.dll", environmentVariables: environmentVariables);
});
Running nuke RunMigrator will always keep our database up to date.
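The post does not show the migrator project itself, and several libraries can play that role. Purely as an illustration, a minimal DbUp-based console application could look like the following; the dbup-sqlserver package, the database name, and the fallback connection string are assumptions:

// Sketch only: a hypothetical migrator that applies embedded SQL scripts with DbUp.
using System;
using System.Reflection;
using DbUp;

// The RunMigrator target passes the connection string through the DbConnectionString
// environment variable; otherwise fall back to the local SQL Server container.
var connectionString =
    Environment.GetEnvironmentVariable("DbConnectionString")
    ?? "Server=localhost,1033;Database=WebAPI;User Id=sa;Password=Sqlserver123$;TrustServerCertificate=True";

var upgrader = DeployChanges.To
    .SqlDatabase(connectionString)
    .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
    .LogToConsole()
    .Build();

var result = upgrader.PerformUpgrade();
return result.Successful ? 0 : -1;

Because the migrator only applies scripts it has not run yet, executing it repeatedly (locally or during deployment) is safe.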
Running Tests
There are several kinds of tests we can run here: unit tests, integration tests, end-to-end tests, etc. These can be run locally only, as part of our build process, or both. In our case, to get a quick feedback loop, we decided to run integration tests locally as part of our development process before pushing code to the repository (an example of such a test appears at the end of this section).
Target Clean => _ => _
    .Executes(() =>
    {
        EnsureCleanDirectory(ArtifactDirectory);
        EnsureCleanDirectory(PublishDirectory / WebAPIProject);
    });

Target Compile => _ => _
    .DependsOn(Clean)
    .Executes(() =>
    {
        DotNetBuild(s => s
            .SetProjectFile(SourceDirectory / WebAPIProject)
            .SetConfiguration(Configuration));
    });

Target Test => _ => _
    .Description("Run integration tests")
    .DependsOn(Compile)
    .Executes(() =>
    {
        DotNetTest(s => s
            .SetProjectFile(TestsDirectory / TestsProject)
            .SetConfiguration(Configuration));
    });
Running nuke Test will eliminate the need for most local manual testing during our development process.
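What these integration tests look like depends on the project, but as an example, a minimal xUnit test using Microsoft.AspNetCore.Mvc.Testing against the API (and, indirectly, the containers started with nuke StartEnv) could be sketched as follows; the Program class reference and the /health endpoint are assumptions, not code from the post's repository:

// Sketch only: a hypothetical integration test against the running Web API.
using System.Net;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class HealthCheckTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly WebApplicationFactory<Program> _factory;

    public HealthCheckTests(WebApplicationFactory<Program> factory) => _factory = factory;

    [Fact]
    public async Task Health_endpoint_returns_ok()
    {
        var client = _factory.CreateClient();

        var response = await client.GetAsync("/health");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}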
The deployment
Here we saw how to create a target to deploy to Azure, but I would like to leave you with another idea that can also be used locally. Working with Kubernetes can be challenging at first, but once you get used to it, it offers numerous advantages. One of them is the possibility of having a local Kubernetes cluster where we can deploy our application before promoting it to any shared environment, which makes it another good candidate for automation, as the sketch below shows.
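This target is not part of the post's repository; it is a minimal sketch of how a Nuke target could apply Kubernetes manifests to a local cluster, assuming kubectl is installed, the manifests live in a k8s folder, and the cluster context is Docker Desktop's:

// Sketch only: the manifests folder and the kubectl context name are assumptions.
AbsolutePath ManifestsDirectory => RootDirectory / "k8s";

Target DeployLocal => _ => _
    .Description("Deploy the application to the local Kubernetes cluster")
    .Executes(() =>
    {
        // Uses Nuke.Common.Tooling.ProcessTasks; requires kubectl on the PATH.
        ProcessTasks.StartProcess(
                "kubectl",
                $"apply -f {ManifestsDirectory} --context docker-desktop")
            .AssertZeroExitCode();
    });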
We believe that automating the majority of repetitive tasks in our day-to-day work can save significant time, allowing us to focus on more valuable things. Nuke makes this job easier for us, so go for it. Thanks, and happy coding.