You missed the point like Mars Climate Orbiter

This post is about validation, Bean Validation specifically, and what it solves. It is a reaction to a post by Julien Tournay which basically claims - if I sum it up - that:

  • Scala is faster than Java (he is coming from far far away on that one - du diable vauvert as we say in French)
  • Bean Validation is kind of crap
  • The new Play Unified Validation API is awesome

I usually am old and wise enough not to answer these kinds of posts. But I want to feel the blood of my youth again damn it! After all I am from an era when The Server Side was all the rage, trolls had real teeth, things were black or white. That was good fun.

Why the "You missed the point like Mars Climate Orbiter" title? With a sensationalist title like "Scala is faster than Java", I had to step up my game. Plus, Mars Climate Orbiter missed Mars due to a conversion error, and we will talk about conversion today.

I won't refute things point by point but rather highlight a few fundamental misunderstandings and explain why things are like they are in Bean Validation and why it makes sense.

Java was limiting

IMHO, @emmanuelbernard a créer son API avec les outils a sa disposition - @skaalf
IMHO, @emmanuelbernard has created his API with the tools he had at his disposal - @skaalf

We certainly pushed the boundaries of Java when both CDI and Bean Validation were designed and I wished I had more freedom here and there. But no, Bean Validation is like it is because that's the most natural way validation is expressed in the Java ecosystem for what we wanted to achieve. It is no good to offer something that does not fit into the ecosystem.

For example, in Java, people use mutable objects. Mutable, believe it or not, is not a swear word in this language. My understanding is that the Play Unified Validation API makes use of the Scala community's inclination for immutability.

Validation, conversion and marshalling

In the use case Julien takes as an example, a JSON string is unmarshalled, conversion is applied (for dates - darn JSON) and then values are validated. He considers it weird that Bean Validation only does the last step and finds the flow wrong.

We kept this separation on purpose. The expert group vastly agreed that Bean Validation was not in the business of conversion and that these steps ((un)marshalling, conversion, validation) should be separated. Here is a key reason:

  • marshalling and conversions are only at the Java boundaries (web frameworks, service endpoints, datastore endpoints, etc.)
  • validation happens both at these boundaries but also at key lifecycle events within the Java boundaries

Conceptually, separating validation from the rest makes a lot of sense to enable technology composition and avoid repetitions - more on that latter point later. What is important is that for a given boundary, they are properly integrated. It's not unlike inlining in compilation.

Bean Validation is in the business of not being called

One key point in the design of Bean Validation is that it has been built to be integrated within a cohesive stack more than to be used individually.

JPA transparently calls Bean Validation on your objects at the right time. JSF calls Bean Validation transparently before populating the beans. JAX-RS calls Bean Validation on the inbound and outbound resources transparently. Note that these three technologies do address the marshalling and conversion problem already. That's their job.
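As a minimal sketch of that integration (assuming a hypothetical constrained Car POJO, like the one sketched in the next section), a JAX-RS resource only has to mark the inbound parameter with @Valid and the container triggers Bean Validation by itself:

import javax.validation.Valid;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/cars")
public class CarResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(@Valid Car car) {
        // if Car's constraints are violated, the runtime rejects the
        // request before this method body ever runs
        return Response.ok().build();
    }
}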

So the key is that an application rarely has to call the Bean Validation API. I consider it a failure or an unfinished work when it has to.

If you need to explicitly validate some piece of data in a fire-and-forget way, Bean Validation might not be the best tool. And that's OK.

Back to the JSON case. When the JSON Binding spec is out, we will have the same integration that is currently present in Play's unified parsing, conversion and validation API (it could also be done today in Jackson; I'm not sure what extension points this library offers). While the three layers marshalling / conversion / validation will be conceptually separated - and I'm sure will report different types of errors for better tracking - the implementation will use some specific APIs of Bean Validation to inline validation with the unmarshalling and conversion. That API exists BTW: it is Validator.validateValue, which lets you validate a value without creating the associated POJO. It is used by web frameworks already. Pretty much what the Play Unified Validation API does.
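As an illustration, here is a minimal sketch of that API, assuming the hypothetical Car POJO sketched in the next section with a @Min(2) constraint on seatCount:

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

public class InlineValidation {

    private static final Validator VALIDATOR =
            Validation.buildDefaultValidatorFactory().getValidator();

    // check a raw value against the constraints declared on Car.seatCount
    // without ever instantiating a Car - this is what a framework can
    // inline right after unmarshalling and conversion
    public static boolean isValidSeatCount(int candidate) {
        Set<ConstraintViolation<Car>> violations =
                VALIDATOR.validateValue(Car.class, "seatCount", candidate);
        return violations.isEmpty();
    }
}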

As for JAXB and XML binding, well, let's say that there is an X in it and that it's not safe for children. More seriously, we have integration plans with mapping between the XSD and the Bean Validation constraints but we haven't gotten around to doing it yet.

Bean Validation is in the business of declaring constraints once

From what I see of the Play Unified Validation API, you put the declaration / implementation of the validation next to the marshalling logic. In other words, you cannot share the constraint declaration between different marshalling operations or even between object transformations.

Bean Validation has been designed to let the user declare the constraints once and have them validated across the whole stack. From the web form and service entry points down to the database input/output and schema definition. Some even use our metadata facility to propagate the constraints up to the client side (JavaScript).
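As a minimal sketch of what declaring once looks like (using the Car example discussed below in the comments):

import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

public class Car {

    // declared once here, picked up by JSF on the form, by JAX-RS on the
    // resource, by JPA before flushing - and listable via the metadata API
    @NotNull
    private String manufacturer;

    @Min(2)
    private int seatCount;

    // getters and setters omitted
}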

And that's future proof, when we add JSON-B support, boom the constraints already declared are used automatically and for free.

Conclusion

Bean Validation cannot be understood if you take it in isolation. It is useful and works in isolation for sure but it shines when integrated in a platform from top to bottom. We did and keep doing it in Java EE but we also make sure to keep our APIs and SPIs open for other platforms to do the same. Spring famously has been doing some of it.

Let's mention a few key features of Bean Validation that are not addressed at all in Play's approach:

  • i18n
  • constraint inheritance
  • method validation
  • custom and context sensitive programmatic error report
  • partially valid object graph - yes that's important in a mutable world
  • etc

Even a few of these would make Play's stuff much much different.

Now with this context properly set, if you read back Julien's complaints about Bean Validation (especially about its design), they pretty much fall one by one. I have to admit their composition approach is much nicer than Bean Validation's, which relies on annotated annotations. That's the main thing I miss.

Design is choice. Without knowledge of the designer's choices, you can too easily misunderstand her design and motives. Julien has been comparing apples with oranges, which is ironic for someone working on one of the most type-safe languages out there :o)

But it's good that more people take data validation holistically and seriously and step up the game. Less work for app developers, safer apps, happier customers. Judging by the location, I could even share some thoughts over a few rounds of whisky with Julien :)

Peace.

Comments:
 
19. Jun 2014, 21:01 CET | Link
Julien Tournay

And the troll shall begin ;)

Bean Validation is like it is because that's the most natural way validation is expressed in the Java ecosystem

I agree. That's what I meant in my tweet. I admit it was poorly written though. Something similar to the unified API would not only be hard to implement in Java (at least before Java 8 and lambdas; I suspect the type system would also be limiting), but would also be fucking stupid.

marshalling and conversions are only at the Java boundaries

That's true. I even say it in the post. It's also by far the most common use case, so it's natural to test it that way. Writing a JEE app for the purpose of benchmarking would have been overkill.

Conceptually, separating validation from the rest makes a lot of sense to enable technology composition and avoid repetition

There's no repetition in the unified API. Composition is, IMHO, a lot easier if you separate the concerns. From my post:

def between(lower: Int, upper: Int) = min(lower) |+| max(upper)

I'd be (honestly) curious to see the implementation of @Between using Bean Validation.

One key point in the design of Bean Validation is that it has been built to be integrated within a cohesive stack more than to be used individually.

So you either go for the full-fledged Java solution, or pay the price?

Bean Validation has been designed to let the user declare the constraints once and have them validated across the whole stack.

The unified API does this too. You can see the same Rules used on Json, Form, and even case class validation in the test suite and the documentation.

And that's future proof, when we add JSON-B support, boom the constraints already declared are used automatically and for free.

Anybody can add support for JSON-B to my API. The built-in and user-defined constraints are used automatically and for free.

Let's mention a few key features of Bean Validation that are not addressed at all in Play's approach:

You're the one trolling here ;) Of course, the unified API does all that.

Now with this context properly set, if you read back Julien's complaints about Bean Validation (especially about they design), they fall one by one pretty much. I ahve to admit their composition approach is much nicer than Bean Validation's which relies on annotated annotations. That's the main thing I miss.

Hopefully that's doable in Java 8. I'd be very interested in discussing this if it's on the roadmap.

I could even share some thoughts over a few rounds of whisky with Julien :)

Let's do this!

PS: there's a little typo by the end of the text I ahve to admit their ;)

 
19. Jun 2014, 21:44 CET | Link

Hi Emmanuel,

I don't want to fuel the fire here but I'd like to cast some light on one aspect of Julien's post that really caught my eye. I am not really into the Java vs. Scala arguments and also agree with you that separation of concerns is a good thing to have. However, I never really got warm with the Bean Validation API. Julien briefly touches on my core concern and I'd like to hear your thoughts on that. Here's the thing:

In my opinion, the Bean Validation JSR creates incentives to write poorly designed and unsafe code. Quite a bold statement you might say, but let me elaborate on what I mean by this. Let's consider a tiny example:

class User {
	
	@NotNull @Email
	String emailAddress;
}

The first and foremost problem with this, I think, is that with examples like these we grew a generation of developers who think email addresses are Strings. An email address is an email address is an email address.

What's happening here is that classes are degraded to structs: dumb data containers that get their logic and constraints applied from the outside instead of taking care of them themselves. This has serious consequences: a type basically is a set of constraints and guarantees. Everyone using that type will be able to know that these constraints are met.

If any client code now gets access to a User instance, it still cannot be sure that bloody String really is an email address, but can only hope that someone validated it some layer above. Now consider this alternative:

class EmailAddress {
	
	public EmailAddress(String value) {
	  // explicit null check
	  // regex validation
	}
}

class User {
	
	public User(EmailAddress email) {
	  // reject null email
	}
}

If you have code like this, you cannot create an instance of EmailAddress with an invalid String. EmailAddress can easily be unit tested to verify that. A client that gets access to a User instance can now be sure it doesn't run into a NullPointerException completely independent of any framework in place. The types do what they're supposed to do: enforce invariants.

I think what plays into this is that people usually don't distinguish between core business constraints and validation. Whether an arbitrary String is an email address is not validation, it's an intrinsic trait of the concept email address.

I think it's perfectly fine to use JSR-303 to validate form backing objects, as their purpose is to accept invalid values, so that they can be checked and feedback be given to the users. You might want to report all errors back instead of failing on the first value, fine. But if that's what you want, this is a strong indication that you need a dedicated abstraction for this.

I also totally don't get why Bean Validation integrates with lower layers like JPA. I can sort of understand that the starting point was: we have constraints here, we have constraints there, let's try to unify that. I still think trying to unify things that look similar at first glance but are driven by totally different aspects (UI vs. data storage) was not a good idea in the first place (similarly to trying to front everything that's remotely related to persistence with JPA ;).

If you really find your persistence provider detecting invalid data when you store it, doesn't that mean that invalid data has passed your application logic already? What does that say about your application code? It apparently didn't rely on the annotation-defined constraints being met, right?

I really think JSR-303 is a neat way to verify form objects and easily present errors to users. But once these objects have passed this mitigation layer, they need to be transferred into types that enforce those rules in their design, as otherwise you're back to data structures and functions, not types. I also think JSR-303 does not belong in the toolset to implement domain objects, as they need to enforce constraints, communicate domain abstractions etc.

Summarizing all of this, I think JSR-303 has its place, but a rather narrow one, which is in stark contrast to how it's usually presented.

What do you think?

 
19. Jun 2014, 22:34 CET | Link

Hi Julien,

When I mentioned technology composition I did not mean constraint composition. I meant being able, say, to use JAX-RS and Bean Validation for their orthogonal concerns but integrated, without having to manually call one from the other.

I did skim through your blog post on the Play validation API and the documentation in the GitHub repo but I did not find any reference to how it is separated from the JSON parsing, nor to the declarative expression of the constraints, nor to i18n. Is there more complete doc somewhere? In the code sample I linked to, the constraints are associated with the JSON attribute String. I don't know how you link that to the object property in the first place and iterate over the list of constraints.

The composition example would look like this

import static java.lang.annotation.ElementType.*;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Documented;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.OverridesAttribute;
import javax.validation.Payload;
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;

@Min(0)
@Max(Long.MAX_VALUE)
@Constraint(validatedBy = {})
@Documented
@Target({ METHOD, FIELD, ANNOTATION_TYPE, CONSTRUCTOR, PARAMETER })
@Retention(RUNTIME)
public @interface Between {
    String message() default "{com.acme.Between}";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};

    @OverridesAttribute(constraint = Min.class, name = "value")
    long min() default 0;

    @OverridesAttribute(constraint = Max.class, name = "value")
    long max() default Long.MAX_VALUE;
}

I could also add @ReportAsSingleViolation so that an error generated by @Min and/or @Max is reported as a single error, as if it were @Between.
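On the composed annotation above, that is one extra line:

@ReportAsSingleViolation // failures of @Min or @Max surface as one @Between violation
@Min(0)
@Max(Long.MAX_VALUE)
@Constraint(validatedBy = {})
public @interface Between {
    // attributes as above
}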

The particularly verbose thing is the attribute overriding on complex cases.

 
19. Jun 2014, 22:46 CET | Link

Hello Oliver,

I tend to agree with you that, assuming a perfect language with zero type declaration and conversion friction, one would have written an EmailAddress type. In fact I often mention that constraints are in many ways a refinement of the type decorated. But in practice, I have almost never seen EmailAddress User.getEmailAddress(), because it involves a cascade of converter implementations (different converters even for the same type) and people give up. Plus EmailAddress is definitely less cross-platform than String.

If you do that in your projects kudos. Most developers including me are more lazy :)

I don't have the same experience as you on the constraints being different in the UI and in the DB. Assuming we consider the converter problem resolved, there is a core of the data constraints that are shared everywhere, and there are constraints that might be tighter and more specific to the key entry points of your system. What Bean Validation offers is a way to share the common ones and to make sure all entry points (web forms, REST, RMI, you name it) share the same tight data constraints. That is why we have groups and group inheritance, to literally separate constraints into sub-groups.
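A minimal sketch of groups (Order and CheckoutChecks are made-up names for the example): the shared constraints live in the default group, the tighter one only fires when its group is requested at the entry point.

import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

public class Order {

    // marker interface identifying the tighter entry-point rules
    public interface CheckoutChecks {
    }

    @NotNull // shared everywhere (default group)
    private String customerId;

    @Min(value = 1, groups = CheckoutChecks.class) // only checked at checkout
    private int itemCount;
}

// at the checkout boundary:
// validator.validate(order, Order.CheckoutChecks.class);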

One more thing: my database schemas have not-null, type size and a few more constraints expressed. I'm happy to have them shared by the UI. I know schemas are old school for some but that's another debate ;)

 
20. Jun 2014, 00:22 CET | Link

Hi Oliver,

I also like the idea of using domain-specific types (and EmailAddress is a perfect example; I would use one all the time), but in my experience it creates problems when it comes to data binding in the UI, persisting attributes of these types, binding them to XML etc. You then need custom converters for each domain-specific type.

Plus, you still need to express the invariants applying to such a type somehow; typically you'd do this in an imperative manner. This makes it hard to impossible, though, to access meta-data about these constraints and e.g. use it to create tailored UI fields etc. To me, the declarative approach of Bean Validation is one of its big advantages. Besides reading constraint meta-data, this also allows feeding constraints in from different sources.

Also I'm wondering how well that approach scales. Would you really create a custom type for each and every property of your model: PersonName, ProductName, AddressLine1, AddressLine2 etc.? If not, how would you express invariants on the remaining properties?

I'm not sure why Bean Validation should make types have their constraints applied from the outside, not [taking] care of them themselves. On the contrary, using Bean Validation, a domain type itself specifies the invariants applying to its properties (again, some of these properties could be of specific value types such as EmailAddress).

Now it's a different question when these constraints are enforced. It would e.g. very well be possible to validate the constraints of a property when its value is set. Admittedly this could still be made easier, but also today you could implement an AspectJ aspect which reads the constraints via the meta-data API and enforces them upon value changes. Doing it at flush time seems like a good trade-off to me, as validation errors typically cause the transaction to be rolled back, preventing the persistent state of the model from being transitioned into an illegal state.
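For the metadata part, here is a minimal sketch of what such an aspect would read at runtime (reusing the hypothetical Car from the post above):

import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.metadata.BeanDescriptor;
import javax.validation.metadata.ConstraintDescriptor;

public class ConstraintMetadata {

    public static void listSeatCountConstraints() {
        Validator validator =
                Validation.buildDefaultValidatorFactory().getValidator();

        // the metadata API exposes the declared constraints at runtime,
        // e.g. for an enforcing aspect or for generating tailored UI fields
        BeanDescriptor bean = validator.getConstraintsForClass(Car.class);
        for (ConstraintDescriptor<?> descriptor : bean
                .getConstraintsForProperty("seatCount").getConstraintDescriptors()) {
            System.out.println(descriptor.getAnnotation()); // e.g. @Min(value=2)
        }
    }
}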

 
20. Jun 2014, 10:25 CET | Link
Julien Tournay

This is getting interesting :) I'll try to address all your questions.

I did not mean constraint composition. I meant being able, say, to use JAX-RS and Bean Validation for their orthogonal concerns but integrated, without having to manually call one from the other.

The Rule (Rule is the class performing validation in the unified API, code) is basically a pure (no side effect) function A => Validation[Errors, B] where A is the type of the input, and B the type of the output if the validation succeeds.

IMO a pure function is hard to beat when it comes to integrating with other libraries. You seem to be stating that manually calling the validation is a bad idea. I don't think that's true, quite the contrary actually. When the validation is called manually, it's very easy to understand, reason about, and to test.

I did not find any reference to how it is separated from the JSON parsing

This file contains the totality of the validation support for JSON. As you can see, no reference to a parser.

I do think that the parser should be wrapped in a Rule though. The complete workflow from a JSON String to a case class would be

(parser: Rule[String, JsValue]) compose (mapping: Rule[JsValue, SomeClass])

That way a parsing error would result in a validation Failure, with a proper error message.

nor declarative expression of the constraints, nor i18n

From the blog post "Scala is faster than Java":

def checkCase(caseMode: CaseMode) = validateWith("constraints.checkcase") { (s: String) =>
   if (caseMode == CaseMode.UPPER)
      s == s.toUpperCase
   else
      s == s.toLowerCase
}

So validateWith is just a helper to create a Rule[A, A]. As you can see, it's very easy. You've also guessed that "constraints.checkcase" is an i18n key, and not an error message. It's not the job of the validation API to resolve that to a complete error message. It's trivially done in Play: @message("constraints.checkcase"). Again, you'll end up just doing function composition.

The generic, built-in constraints are defined here.

Is there more complete doc somewhere

YES :) Not hosted (yet) though. I may publish it on my website. Have a look at the documentation folder. You can also check out the source code, which is fairly small in terms of LOC, and pretty straightforward to understand.

In the code sample I linked to, the constraints are associated with the JSON attribute String. I don't know how you link that to the object property in the first place

Not sure I understand what you mean here. For the link between a validated json, and a case class instance, it's function application again.

Simply put, rule1 ~ rule2 gives you a higher-order function. All you have to do is to pass a function matching the parameters. If you're mapping to a case class, the apply method from the class companion object is generally used.

for example (rule1 ~ rule2)(SomeClass.apply _)

and iterate over the list of constraints.

If I get your question right, the answer is: I don't. The Rule itself knows what to do. I just apply it. Generally, you'll have something along the lines of:

(extract: Rule[JsValue, JsValue]) compose (validate: Rule[JsValue, CaseClass])

where extract extracts a subtree of the JSON, and validate knows how to deal with it. Once again, function composition in action. (__ \ "foo").read(min(0)) does just that behind the scenes.

Hope that clarifies it. Your feedback is very interesting to me. It shows where I should improve the doc. Thanks for the constructive arguments.

 
20. Jun 2014, 10:58 CET | Link
You seem to be stating that manually calling the validation is a bad idea. I don't think that's true, quite the contrary actually. When the validation is called manually, it's very easy to understand, reason about, and to test.

Basically yes. Having a framework whose goal is to let users do the validation calls explicitly is definitely not what I wanted or want out of Bean Validation. I don't think users should be in the business of remembering exactly when to call validation. It's like security. Also (particularly in Java), you can't expect objects not to mutate during their lifetime. So validating at key lifecycle events is important.

Bean Validation lets the user express constraints on the pivot data structure (the POJO specifically) and not on the inbound / outbound data structures. That opens up the ability to share and enforce the constraints across edge points in the system (say an app or a set of apps), and by declaratively binding the constraints to the data structure you open up the ability to list them (that's Bean Validation's metadata API's job) and reproduce / transfer them beyond the system.

In your blog example, I don't know that Car.seatCount is always above 2. I know that when I read a JSON representation of Car in this specific spot of my app, Car.seatCount is above 2. But it's possible that your user interface will let the user set a 1 man car.

To me that's a problem.

 
20. Jun 2014, 11:09 CET | Link

That's a key difference between Scala and Java here.

In Scala, if the object exists, it is valid. You may or may not enforce constraints at the type level, but if I get an instance of a Car, I'm not going to revalidate it. It's there, therefore it's valid.

A trivial example of this is the absence of null checks in Scala code. Of course you may call a Java API possibly returning nulls. In that case you'll just represent the possible absence of value in the type system using Option. The field is therefore always a valid value.

 
20. Jun 2014, 11:12 CET | Link
In your blog example, I don't know that Car.seatCount is always above 2. I know that when I read a JSON representation of Car in this specific spot of my app, Car.seatCount is above 2. But it's possible that your user interface will let the user set a 1 man car.

The bean validation API does not enforce that either. I can easily create an invalid instance. The only way to enforce the constraint is to encode it in the type system.

 
20. Jun 2014, 11:27 CET | Link
Julien Tournay wrote on Jun 20, 2014 05:09:
That's a key difference between Scala and Java here. In Scala, if the object exists, it is valid. You may or may not enforce constraints at the type level, but if I get an instance of a Car, I'm not going to revalidate it. It's there, therefore it's valid.

I don't think this is a language difference. You can write code that behaves like this in Java as well (and you don't have to sell me on the idea that Scala is much more concise in that regard, Lombok helps a bit).

I think people understandably take a step back from that (and Emmanuel's response to my question also points in that direction) as, for all the benefits that has in the application layer, it complicates e.g. the reporting of errors for user interfaces. With application code you want to fail fast, prevent errors from being able to happen. On the remote parts of the system (UI, a REST service) you want to collect errors to be able to present all of them in one go. These are completely different requirements.

So to achieve the latter, people start weakening the application code they have to accommodate the requirements coming from a (what should be) unrelated area. I'd argue this is the wrong way to go, as your application code will get more error-prone and unsafe. Yes, the solution IMO is to have dedicated types for each use case, which is more work (how much is to be debated), but I don't think you really solve a problem by trying to hide it and creating incentives for people to accept compromises in their domain model design to achieve a peripheral task.

But - to come back to your original statement - I don't think this is a language thing at all. How would you tackle the requirement to collect and display a set of binding errors to the user in Scala? With an additional, more lenient type as well, wouldn't you?

Btw. great discussion, thank you Emmanuel and Julien!

 
20. Jun 2014, 11:55 CET | Link
The bean validation API does not enforce that either. I can easily create an invalid instance. The only way to enforce the constraint is to encode it in the type system.

Right, Bean Validation itself doesn't enforce the constraints at any point. But as outlined above, you can achieve the enforcement of constraints during object creation or value changes e.g. using AspectJ. In that sense, the constraint annotations are just an amendment to the type system.

 
20. Jun 2014, 12:13 CET | Link

Hi Oliver.

I think we totally agree :)

I don't think this is a language difference.

Not a language difference per se, but a difference in their communities. Scala devs tend to trust the types (null is the obvious example), while Java devs may not.

With application code you want to fail fast, prevent errors from being able to happen.

Yes, definitely. Ideally you enforce them in the types, so they can't even happen. Of course it may require a more powerful type system than what Java and Scala have to offer. In that case you could, for example, just throw an exception in the constructor. You can easily reuse the validation you already defined for that purpose.
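In Bean Validation terms, a minimal sketch of that fail-fast constructor could reuse the declared constraint via validateValue (StrictCar is a made-up name; a shared Validator is assumed):

import java.util.Set;
import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.constraints.Min;

public class StrictCar {

    private static final Validator VALIDATOR =
            Validation.buildDefaultValidatorFactory().getValidator();

    @Min(2)
    private final int seatCount;

    public StrictCar(int seatCount) {
        // fail fast: reuse the declared constraint at construction time
        Set<ConstraintViolation<StrictCar>> violations =
                VALIDATOR.validateValue(StrictCar.class, "seatCount", seatCount);
        if (!violations.isEmpty()) {
            throw new IllegalArgumentException(
                    violations.iterator().next().getMessage());
        }
        this.seatCount = seatCount;
    }
}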

 
20. Jun 2014, 12:21 CET | Link
But - to come back to your original statement - I don't think this is a language thing at all. How would you tackle the requirement to collect and display a set of binding errors to the user in Scala? With an additional, more lenient type as well, wouldn't you?

Exactly what the unified API does :) Example from the tests:

contactValidation.validate(invalidJson) mustEqual(Failure(Seq(
    (Path \ "informations" \ 0 \ "label") -> Seq(ValidationError("error.required")))))
 
20. Jun 2014, 13:07 CET | Link
In your blog example, I don't know that Car.seatCount is always above 2. I know that when I read a JSON representation of Car in this specific spot of my app, Car.seatCount is above 2. But it's possible that your user interface will let the user set a 1 man car. - Emmanuel
The bean validation API does not enforce that either. I can easily create an invalid instance. The only way to enforce the constraint is to encode it in the type system. - Julien

We don't enforce it at the language level. But we do enforce it at the Java EE platform level (that's the goal anyways) and any platform embracing the same logic we did for Java EE can enforce this.

And as Gunnar mentioned, you can go all crazy and unleash The AspectJ. I personally don't think that's a good idea to enforce it everywhere. Only at the system boundaries and when objects are transformed.

 
20. Jun 2014, 13:17 CET | Link
That's a key difference between Scala and Java here. In Scala, if the object exists, it is valid. You may or may not enforce constraints at the type level, but if I get an instance of a Car, I'm not going to revalidate it. It's there, therefore it's valid. A trivial example of this is the absence of null checks in Scala code. Of course you may call a Java API possibly returning nulls. In that case you'll just represent the possible absence of value in the type system using Option. The field is therefore always a valid value.

I have to admit, that's a tough one to swallow for me. In real life, things are entered incorrectly and things are partially valid. And from a business standpoint it is fine as long as they get corrected before a given step in a workflow. So denying data the right to be represented in Scala because it's not (yet) valid is a hell of a sadistic rule :) It comes back to what I articulated earlier with different groups of rules. An object can be valid by one set of rules and invalid by another.

By the way, you don't enforce that a Car instance is always valid in your example. You enforce that it is always valid if it is created from that very specific code. I can take Car and instantiate a one seated one. So by your JSON input rules, my manually instantiated Car instance is wrong. I gather that in your approach, you don't care. And in my approach I will care at specific and distributed points of my system (lifecycle).

 
20. Jun 2014, 13:53 CET | Link
I have to admit, that's a tough one to swallow for me. In real life, things are entered incorrectly and things are partially valid. And from a business standpoint it is fine as long as they get corrected before a given step in a workflow.

You typically validate at the boundaries of the system. Yes, I don't think you should instantiate a class if you're not confident in the validity of the data you're putting in it. It's perfectly fine to continue working with a different data structure until then. That's why I advocate validation before instantiation. If you need partially valid data, use different types.

In the case of parsing (JSON, XML, whatever), it could well be impossible to create an instance anyway (field with an invalid type).

An object can be valid by one set of rules and invalid by another

I'm delighted to read that argument. It's an excellent example of why the validation should not be defined inside the class.

By the way, you don't enforce that a Car instance is always valid in your example. You enforce that it is always valid if it is created from that very specific code. I can take Car and instantiate a one seated one. So by your JSON input rules, my manually instantiated Car instance is wrong. I gather that in your approach, you don't care. And in my approach I will care at specific and distributed points of my system (lifecycle).

I made this point earlier. You may or may not represent the constraint in the type system. It's, as always, a cost vs. benefits equilibrium.

If it's critical to be sure that a Car has at least 2 seats, you make sure that a one-seated car is impossible to create. The type system is the best place to do so, as it can formally prove that property. Of course, it will come at a price. Quite often, you just validate at the boundaries (danger zones) and assume it's been validated everywhere else, which is the good-enough solution. The APIs are similar in this aspect.

 
20. Jun 2014, 14:06 CET | Link
Basically yes. Having a framework whose goal is to let users do the validation calls explicitly is definitely not what I wanted or want out of Bean Validation. I don't think users should be in the business of remembering exactly when to call validation. It's like security. Also (particularly in Java), you can't expect objects not to mutate during their lifetime. So validating at key lifecycle events is important.

The Bean Validation API alone does not offer that advantage. It only works in the context of a JEE app.

 
20. Jun 2014, 14:29 CET | Link
An object can be valid by one set of rules and invalid by another - Emmanuel
I'm delighted to read that argument. It's an excellent example of why the validation should not be defined inside the class. - Julien

We do have a mechanism in Bean Validation to express that in the class: groups.

Basically yes. Having a framework whose goal is to let users do the validation calls explicitly is definitely not what I wanted or want out of Bean Validation. I don't think users should be in the business of remembering exactly when to call validation. It's like security. Also (particularly in Java), you can't expect objects not to mutate during their lifetime. So validating at key lifecycle events is important. - Emmanuel
The Bean Validation API alone does not offer that advantage. It only works in the context of a JEE app. - Julien

Sure. But we did not design the API in a vacuum. We designed it to be used with a platform.

 
20. Jun 2014, 14:57 CET | Link

Hey,

The Bean Validation API alone does not offer that advantage. It only works in the context of a JEE app.

Bean Validation is not tied to Java EE. You can use it in all sorts of environments and containers, be it Java EE, Spring, RCP applications, Android etc. It's only that Java EE (or its contained specs) already defines many integration points (JAX-RS, JSF, JCA, JPA), whereas in other cases some lines of integration code may have to be written if BV integration has not yet been foreseen by the involved frameworks.

--Gunnar

 
20. Jun 2014, 15:05 CET | Link

Hi Gunnar.

That's my point. It works because some environments have integrated the API. Play does the same with the Json API. The design of the API itself has nothing to do with that. Of course, it's still a good thing.

 
24. Jun 2014, 10:14 CET | Link

I think this discussion has rolled a bit onto a side track. I've used Bean Validation quite a lot and the issue has never actually been in the constraints or how the validation is applied in the underlying system. The real problem lies in the reusability of the domain model.

In most cases your API supports different use cases for the same domain model: partial updates, adding items to a composing model, inserting, deleting and so on. All the time it is the very same domain model, but the validation scheme differs in each case. I've tried to model such scenarios with Bean Validation and it totally fails. You will end up with different domain classes for each use case, each having a different set of constraints applied. It doesn't matter if you have localization or reusable constraints if you have 10 domain models representing the same thing. And this is the real issue with Bean Validation.

Validation should be handled with some DSL implementation. Julien's Play Unified Validation API handles this quite well. You have incoming JSON or something else, then you apply the required conversion and use-case-specific validation. This way you will always have your one Car domain model which is valid for the given use case. No gazillion classes providing a custom validation scheme for the given case.

I think this issue should be considered in the development of the Bean Validation spec. In most cases Bean Validation is unusable for this reason and many have built custom DSLs (maybe using a Bean Validation implementation beneath).

 
24. Jun 2014, 11:16 CET | Link
mandubian
I totally agree and this is exactly what we talked about in our talk with @skaalf/Julien at Pingconf last January: http://www.ping-conf.com/#julientournay

Pascal

 
24. Jun 2014, 11:20 CET | Link

And before someone replies that Bean Validation has groups that you can mix and match... groups are not a real solution because they are not properly supported by the underlying Java EE implementations and they also make your domain classes hard to reason about.

A quote from Jersey documentation:

Jersey does not support, and doesn't validate, constraints placed on constructors and Bean Validation groups (only Default group is supported at the moment).
 