In the alternative tutorial 04 we discovered that our tests were not enough, and we left with the promise to take care of that. Here we are, honoring that promise.
Of course, we will not talk about just testing and testing forever... In fact we'll be strengthening security on our system by enforcing uniqueness of emails.
The code for this tutorial can be found in this repository: github.com/davidedelpapa/rocket-tut, and has been tagged for your convenience:
git clone https://github.com/davidedelpapa/rocket-tut.git
cd rocket-tut
git checkout tags/tut5
A promise is a promise: more on tests
Hold your horses, cowboy! First we need to honor the promise we made last time.
I'm still in the "Redis branch" of the repo: we'll change things here, test properly, and then I'll show a nice git trick, if you don't already know it, to merge back just the tests to master (where there's the MongoDB version).
We have to do two things: delete each user we insert in testing, and make sure that each email is unique.
As for the first point, we have to make sure that EACH user we insert is unique (i.e., no copy-pasta, even though I'm Italian and I love pasta).
The following is useful in almost any case we insert a user first:
if response.status() == Status::Ok {
    let res = client.delete(format!("/api/users/{}", id))
        .header(ContentType::JSON)
        .body(r##"{
            "password": "123456"
        }"##)
        .dispatch();
    assert_eq!(res.status(), Status::Ok);
}
Of course we have to fill in the right password.
In tests/basic_test.rs, in new_user_rt_test(), we do not extract the user id at all, so we have to pass it on the fly, like so:
client.delete(format!("/api/users/{}", id))
In tests/failures_test.rs instead, in id_user_rt_fail(), we have to extract the id from the user. Remember that in tests/failures_test.rs we make the responses fail on purpose (it's our failures test), but we always check that the insertion itself was correct. So there's no need to check that again, and we can take away the if altogether.
Instead, in info_user_rt_fail(), before we forge a fake id, we need to clone() it:
let mut id = user_new.id.clone();
so that later on we can reuse user_new.id:
if response.status() == Status::Ok {
    let res = client.delete(format!("/api/users/{}", user_new.id))
        .header(ContentType::JSON)
        .body(r##"{
            "password": "123456"
        }"##)
        .dispatch();
    assert_eq!(res.status(), Status::Ok);
}
In tests/persistency_test.rs, too, we do not extract the id info from the response; we also have to remember that we cannot access the first client anymore, so we just call client2:
let res = client2.delete(format!("/api/users/{}", user.id))
And that is it, provided we adapt the code and the passwords a little. Now we can test everything and clean up after our mess. Well done.
Checking all the keys present in DB 1 on the Redis side, we get:
127.0.0.1:6379[1]> keys *
(empty list or set)
In fact, the lookup key also gets destroyed once its values are depleted.
Now that everything is fixed, we'd like to commit those changes to the Redis branch, but also bring them over to the MongoDB branch (assuming it's master; if yours is main, adjust the following as needed). No worries, we can do that easily. First commit all changes to the Redis branch; then move back to master:
~$ git checkout master
Switched to branch 'master'
~$ git checkout tut04alt tests/basic_test.rs tests/failures_test.rs tests/persistency_test.rs
Updated 3 paths from 69b3ab7
That is, on master we run git checkout <name_of_branch_to_copy_from> <path/to/file> <path/to/file> ..., and git stages on master the modifications to the files we want to copy over from the other branch.
Let's check now:
~$ git status
On branch master
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
modified: tests/basic_test.rs
modified: tests/failures_test.rs
modified: tests/persistency_test.rs
Sorry for this interlude of git-fu, but it is convenient to know when working on many branches at the same time. Now we can commit on master as well.
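If you want to try the trick in isolation first, here's a disposable sandbox repo you can build anywhere; the branch and file names (feature, notes.txt) are made up for the demo:

```shell
# Build a throwaway repo with two branches that differ in one file,
# then copy that file's state from the other branch onto the base branch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
base=$(git symbolic-ref --short HEAD)  # 'master' or 'main', depending on your git defaults
echo "v1" > notes.txt
git add notes.txt
git commit -qm "initial"
git checkout -qb feature
echo "v2" > notes.txt
git commit -qam "edit notes on feature"
git checkout -q "$base"
git checkout feature notes.txt  # stage the feature-branch version of the file on $base
git status --short              # notes.txt shows up as a staged modification
cat notes.txt
```

Note that this form of checkout both updates the working tree and stages the file, which is why `git status` immediately shows it under "Changes to be committed".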
Second promise: Unique emails
Now we need a new test to check whether our system accepts only unique emails. It's a case fit for failure, so we modify our tests/failures_test.rs:
#[test]
fn unique_emails_insertion_fail(){
    let client = common::setup();
    // First user with its email
    let mut response_new_user = client.post("/api/users")
        .header(ContentType::JSON)
        .body(r##"{
            "name": "Jared Doe",
            "email": "jthebest@m.com",
            "password": "123456"
        }"##)
        .dispatch();
    // We have to make sure this does not fail because of a wrong new user insertion
    assert_eq!(response_new_user.status(), Status::Ok);
    assert_eq!(response_new_user.content_type(), Some(ContentType::JSON));
    let response_body = response_new_user.body_string().expect("Response Body");
    let user: ResponseUser = serde_json::from_str(response_body.as_str()).expect("Valid User Response");
    // Second user with the same email
    let mut response_second_user = client.post("/api/users")
        .header(ContentType::JSON)
        .body(r##"{
            "name": "Joy Doe",
            "email": "jthebest@m.com",
            "password": "qwertyuiop"
        }"##)
        .dispatch();
    assert_ne!(response_second_user.status(), Status::Ok);
    assert_eq!(response_second_user.content_type(), Some(ContentType::JSON));
    assert_eq!(response_second_user.body_string(), Some("\"email already in use\"".to_string()));
    // Cleanup
    let res = client.delete(format!("/api/users/{}", user.id))
        .header(ContentType::JSON)
        .body(r##"{
            "password": "123456"
        }"##)
        .dispatch();
    assert_eq!(res.status(), Status::Ok);
}
In the above we insert a user and then attempt to insert another one with the same email as the first.
In the code we've also added another test, unique_emails_update_fail(), for the PUT route that updates the user. This test inserts two different users and is expected to fail when we update the first user with the same email as the second.
If we run cargo test, of course it will fail:
test unique_emails_insertion_fail ... FAILED
test unique_emails_update_fail ... FAILED
Moreover, now we will have to clean the DB by hand as well.
Just for your info, I'm not a fan of any specific software, but if you do not really know how to use MongoDB and want a GUI to manage it, I'm using Robo 3T by Robomongo. I'm not affiliated with anybody, just mentioning what I happen to use; there are other programs out there as well, just search for them.
Back to business, we have some emails to render unique.
Unique fields in MongoDB
We need to enforce a unique index on a field, so that MongoDB will index by that field (in this case email) as well as by id.
The mongodb crate does not yet offer this functionality. Besides, which part of the code should create the index? The only thing to do is to run the command in a MongoDB shell:
db.getCollection('users').createIndex( { "email": 1 }, { unique: true } )
I ran it in Robo 3T, for example.
OK, technically there's another, more elegant way to do it, entirely in Rust (think of deploying this quickly for an inexperienced user, without having to set up just about everything): we could create a lookup collection, the same way we did with Redis, using the email as the ID and the user ID as the only field.
The data structure will look like this:
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct EmailLookup {
    #[serde(rename = "_id")]
    pub email: String,
    pub user_id: Uuid,
}
Serde will rename email to _id, and MongoDB will use this field as its id, instead of creating an id automatically. It's the same technique we used for User.
You just put it in src/data/db.rs and you are set to go.
The procedure for insertion would then be:
- Check the lookup collection: if the email already exists, let the insertion fail.
- Insert the new user regularly, and extract the ID.
- Insert the user by email in the lookup collection.
At this point we could also change the get-by-email retrieval to go through the lookup collection, but I do not advise doing so, because it would mean first extracting the ID and then using it to get the user by ID... oh my.
BIG WARNING: in MongoDB the _id (whatever its content) is immutable: when the user updates the email, you first have to delete the lookup document, then create another one. Not that difficult, but keep an eye on it.
Anyway, let's discard the idea of a separate lookup collection in this project (it's up to you to implement it if you want; I think we've covered enough together for you to at least try). We'll go for the quick-fix command described above. This leaves us with the single task of returning the right error when inserting a new user.
MongoDB's docs state that a duplicate key on a unique index results in an error of kind writeError with code 11000. We have to find the matching error definition in the mongodb crate. It is mongodb::coll::error::WriteError:
pub struct WriteError {
    pub code: i32,
    pub message: String,
}
Now we know we have to match that same error code. Let's find where we insert a new user in src/routes/user.rs:
match user_coll.insert_one(document.to_owned(), None) {
    Ok(inserted) => {
        match inserted.inserted_id {
            Some(id) => { ... },
            None => ApiResponse::internal_err(), // here!
        }
    },
    Err(_) => ApiResponse::internal_err(), // not here!
}
Why there and not on the Err arm? Because Err means there was no answer from the MongoDB server at all, while an answer WITH an error is covered somewhere else. Actually, the response is a
pub struct InsertOneResult {
    pub acknowledged: bool,
    pub inserted_id: Option<Bson>,
    pub write_exception: Option<WriteException>,
}
You can see that there's the Option with the inserted id, but if that is None then write_exception contains the error. PS: acknowledged is a boolean telling whether the write was acknowledged; if this were a bulk insertion, the result could also carry info on the fields written and the fields with exceptions...
Let's get back on track:
We have to change the None where it says // here! to:
None => match inserted.write_exception {
    Some(write_error) => {
        match write_error.write_error {
            Some(err) => {
                match err.code {
                    11000i32 => ApiResponse::err(json!("email already in use")),
                    _ => ApiResponse::internal_err(),
                }
            },
            None => ApiResponse::internal_err(),
        }
    },
    None => ApiResponse::internal_err(),
}
As for the other route we have to check, update_user_rt(), we just need to verify that the email does not already exist, right after we authenticate the password.
[...]
if found_user.match_password(&user.password) { // After this
    let insertable = found_user.update_user(&user.name, &user.email);
[...]
We'll borrow the find_one() scheme from id_user_rt() (which finds users through the email): if we find one, we'll send the error about the email already being in use; otherwise we'll let everything work normally.
if found_user.match_password(&user.password) {
    // Check the email does not yet exist
    match user_coll.find_one(Some(doc! { "email": &user.email }), None) {
        Ok(mail_query_result) => {
            match mail_query_result {
                Some(_) => { return ApiResponse::err(json!("email already in use")); },
                None => ()
            }
        },
        Err(_) => { return ApiResponse::internal_err(); }
    }
    let insertable = found_user.update_user(&user.name, &user.email);
Now we have implemented the "email already in use" error in both routes. We should build and test (fingers crossed).
test unique_emails_insertion_fail ... ok
test unique_emails_update_fail ... ok
I can consider that a personal win.
Uniqueness in Redis
Let's commit, switch over to Redis and bring the new test in there... And let's hope the fix will be quick.
git push origin master
git checkout tut04alt
git checkout master tests/failures_test.rs
As for Redis, the only way of enforcing uniqueness is through a set, that is, a container of unique objects.
Example:
$ redis-cli
127.0.0.1:6379> sadd mykey "one"
(integer) 1
127.0.0.1:6379> sadd mykey "two"
(integer) 1
127.0.0.1:6379> sadd mykey "one"
(integer) 0
127.0.0.1:6379> smembers mykey
1) "two"
2) "one"
127.0.0.1:6379>
The integer answer is the number of members added (more than one can be added at a time). When re-adding the same member we see that it returns 0, and checking the list of members confirms there are no duplicates.
Let's implement it in code. Briefly, we have to:
- Add the email to the set each time a user is serialized. We do not strictly need to check for failures there (in fact, we should check beforehand), but it is better to check anyhow.
- Before each insert, make sure that the email does not already exist.
- Before updating to a new email, check that it is not already in use.
- Clean up the old email when updating it, and also when changing the password (because the user gets re-inserted). We also have to clean up the email once we remove the user.
As to point 1, we change the to_redis() method to look like this:
fn to_redis(self, connection: &mut Conn) -> AnyResult<()> {
    let id = self.id.to_string();
    // Normalize here too, so the set entry matches the lowercase copy we store
    let email = self.email.to_lowercase();
    let r_user = [
        ("name", self.name),
        ("email", self.email.to_lowercase()),
        ("hashed_password", self.hashed_password),
        ("salt", self.salt),
        ("created", self.created.to_string()),
        ("updated", self.updated.to_string())
    ];
    connection.hset_multiple(&id, &r_user)?;
    // Enforce email uniqueness
    let res_enforce: i32 = connection.sadd(UNIQUE_EMAIL_SET, email.clone())?;
    // Add email lookup index
    if res_enforce != 0 {
        let _ = connection.zadd(LOOKUP, format!("{}:{}", email, id), 0)?;
    } else {
        bail!("email already in use");
    }
    Ok(())
}
We render all emails lowercase because sadd is case sensitive.
We also create a new User method to check uniqueness:
fn is_unique_email(self, connection: &mut Conn) -> AnyResult<bool> {
    // Compare against the normalized (lowercase) form stored in the set
    let res_enforce: Result<i8, _> = connection.sismember(UNIQUE_EMAIL_SET, self.email.to_lowercase());
    match res_enforce {
        Ok(res) => {
            if res == 0 { return Ok(true); }
            Ok(false)
        },
        Err(_) => Err(anyhow!("Connection error")),
    }
}
I'm not going to discuss all the other routes (it is easy to check them in the repo), but this is the POST route:
#[post("/users", format = "json", data = "<user>")]
pub fn new_user_rt(mut connection: Conn, user: Json<InsertableUser>) -> ApiResponse {
    let ins_user = User::from_insertable((*user).clone());
    match ins_user.clone().is_unique_email(&mut connection) {
        Ok(res) => {
            match res {
                true => {
                    match ins_user.clone().to_redis(&mut connection) {
                        Ok(_) => ApiResponse::ok(json!(ResponseUser::from_user(&ins_user))),
                        Err(_) => ApiResponse::internal_err(),
                    }
                },
                false => ApiResponse::err(json!("email already in use")),
            }
        },
        Err(_) => ApiResponse::internal_err(),
    }
}
Easy, isn't it?
A quick build&run confirms that everything is as it is supposed to be.
Conclusions
I think we can call it a day, although we didn't learn anything new about Rocket or about integrating it with other software, as we did in the previous tutorials.
Besides, learning how to strengthen security on our systems by enforcing uniqueness of a field is useful well beyond this specific case: there are many situations in which fields have to be unique and index-like, independently of the real index.
Next time I will introduce a new concept to work with Rocket, and we will use it to authenticate our Users to the platform (finally).
So, stay tuned!