In this decision, the European Patent Office refused to grant a software patent on the concept of filtering and blocking emails if they contain a certain number of inappropriate URLs. Here are the practical takeaways of the decision T 2363/16 (URL based email filtering/MICROSOFT TECHNOLOGY LICENSING) of 6.12.2019 of Technical Board of Appeal 3.5.07:
This European patent application generally relates to the filtering of electronic communications and websites based on the appropriateness of their content. The purpose is to block access to undesirable content, e.g. unsolicited commercial offers or content inappropriate for children.
The invention involves parsing a received email to identify URLs (uniform resource locators) within the email and looking up a rating for each of the identified URLs in a database. If a sufficient number, rating or percentage of the URLs are categorised as “inappropriate”, the electronic communication may be blocked.
The ratings are obtained from a database of categorised URLs, which contains URLs representing public web pages together with category labels indicating membership in “inappropriate” categories such as pornography, hate speech, mature content and drugs. The database contents are served online through custom lookup servers called category name service (CNS) servers.
Here is how the invention is defined in claim 1:
Claim 1 (main request)
An email filtering method for a computer system (100, 200), the method comprising:
receiving (402) an email message;
parsing (404, 604) the email message;
identifying (406) URLs within the parsed email message;
transmitting the identified URLs to a category name service, CNS, server (114);
receiving, from the CNS server, a rating of each identified URL as being appropriate or inappropriate; and
applying a policy including an allow/block logic (118) which determines to permit or inhibit access to the email message,
wherein applying the policy includes determining whether the number of inappropriate URLs of the identified URLs exceeds a threshold greater than zero, and if so, inhibiting access to the email message.
Is it technical?
The board started its inventive-step analysis from a document describing the anti-spam and web filtering products called OrangeBox Mail and OrangeBox Web Home. This document disclosed an email filtering method including steps of receiving and parsing an email message, identifying URLs in the parsed email message, transmitting the URLs to a CNS and receiving a rating as defined in claim 1.
The difference between the invention and this prior-art method thus lay in the criteria taken into account when deciding whether to inhibit access to an email. The prior art disclosed checking URLs in emails against a URL filtering database, but not the specific check specified in claim 1.
The board therefore found that claim 1 differed from the prior art in that the policy establishes that an email is to be blocked if the “number of inappropriate URLs exceeds a threshold greater than zero”.
Here is how the board summarized the appellant’s arguments in favor of a non-obvious technical contribution of this difference:
In the grounds of appeal, the appellant argued that, contrary to the contested decision’s reasoning, the invention did not apply a certain (non-technical) standard of morality but instead provided a technical teaching of how to make use of an appropriate/inappropriate rating received from a server in an improved and non-obvious way. What would have been obvious from document D11 would be to block an email if a URL link contained in the email were listed in the URL filtering database. The invention went beyond this obvious solution by determining whether the number of inappropriate URLs exceeded a threshold greater than zero.
At the oral proceedings, the appellant further argued that the distinguishing feature did not relate to a non-technical user requirement regarding how much inappropriate content was tolerated; rather, it taught how to perform the appropriateness test in a technically advantageous manner. The users did not care how the determination was made; they merely wanted to be protected. The decision of how to perform the content-appropriateness test was not made by an administrator but by the technically skilled person. The test of distinguishing feature (i) had the technical advantage of being surprisingly simple and was inventive. At the priority date 16 years ago, such an improvement would not have been obvious.
However, the board took another point of view:
However, classification of messages as a function of their content is not technical per se (T 22/12 of 16 November 2015, reasons 2.2). In the present case, the classification criteria regarding which emails should be blocked are determined by the user of the system based on non-technical considerations regarding which emails the user does not want to receive. Distinguishing feature (i) therefore merely reflects a non-technical criterion or policy according to which an email is acceptable if it refers to a number of inappropriate web pages that does not exceed a threshold greater than zero. Implementing this policy by determining whether the number of inappropriate URLs exceeds such a threshold is an obvious way to implement the non-technical criterion in the method of document D11, which already uses URLs in a similar way.
As a result, the board decided that claim 1 lacked an inventive step and dismissed the appeal.
You can read the whole decision here: T 2363/16 (URL based email filtering/MICROSOFT TECHNOLOGY LICENSING) of 6.12.2019