New national statistics report shows over 5m fraud and computer misuse offences in 2016


New figures from the Office for National Statistics’ ‘Crime in England and Wales: year ending Sept 2016’ report showed an estimated 6.2 million incidents of crime in 2016.

In addition to covering a wide variety of crimes, such as burglary and vehicle theft, the 2016 results include, for the first time, statistics on fraud and computer misuse.

There were 3.6 million fraud and 2.0 million computer misuse offences in the first full year in which such questions have been included in the Crime Survey for England and Wales (CSEW).

“The inclusion of these new offences yields a new headline estimate of 11.8 million incidents of crime covered by the survey, but it will be another year before a comparable time series is available,” the report stated.

“The new fraud and computer misuse estimation of 5.6 million offences highlights the challenge forces face to be better equipped to fight cyber enabled crime and the need for all of us to better protect ourselves,” said Andy Lea, Head of Policing at KPMG. “These figures also show the difficult decisions forces will need to make when prioritising their use of resources.”

Fraud and computer misuse details

The survey results show that adults aged 16 and over experienced an estimated 3.6 million incidents of fraud, with just over half of these (53%; 1.9 million incidents) being cyber-related.

The CSEW classifies a crime as being ‘cyber-related’ when the internet or any type of online activity was related to any aspect of the offence.

Key findings include:

  • The most common type of fraud experienced was “Bank and credit account” fraud (2.5 million incidents; 68% of the total).
  • “Non-investment” fraud – such as fraud related to online shopping or fraudulent computer service calls – was the second most common (0.9 million incidents; 26% of the total).
  • There were an estimated 2.0 million computer misuse incidents reported.
  • Around two-thirds (66%; 1.3 million incidents) of the computer misuse incidents were computer virus-related and around one-third (34%; 0.7 million incidents) were related to unauthorised access to personal information (including hacking).
CSEW fraud and computer misuse – numbers of incidents for year ending September 2016 (Experimental Statistics).

Financial losses to victims

The report shows that, although a high number of cyber crimes were reported, in just under two-thirds (61%) of incidents resulting in financial loss, the victim lost less than £250.

Two-thirds of fraud incidents involved initial loss of money or goods to the victim (66%), independent of any reimbursement received. This equates to an estimated 2.4 million offences, compared with 1.2 million incidents of fraud involving no loss.

Incidents of bank and credit account fraud were more likely to result in initial loss to the victim (73%, equivalent to 1.8 million) than other types of fraud.

In the majority of these incidents, the victim received a full reimbursement, typically from their financial services provider (83%).

Traditional crime blurs into virtual crime

“We see a blurring between traditional, real world crime and virtual crime; criminals are happy to blend their techniques across the two and so ‘cyber’ can no longer be seen as a separate compartment of crime,” said David Emm, Principal Security Researcher at Kaspersky Lab.

“It is important to note that an accurate year-on-year comparison from the ONS, to demonstrate the growth of fraudulent cybercrime, will not be possible until January 2018. However, we agree that bank and credit account fraud is one of the most problematic areas with the continuing rise of e-commerce,” Emm continued.

 

[Source:- softwaretestingnews]

 

Why evolution is better than revolution in product design


Digital products will always need to be redesigned. Styles progress, hardware technologies advance, and development possibilities are ever-increasing. Just in the past year, the potential for implementing microinteractions and processor-intensive animations and graphics has come along at a fair pace. Product teams are continuously looking to iterate and to stay ahead of, or overtake, the competition. This is increasingly important in furthering the design and development industries, and in delivering to the consumer the very best product available.

The process of redesigning is not always so straightforward. There are times when teams and individuals have to decide whether to redesign from the ground up, or iterate on the current product. In this article we are going to look at both options and analyze just why redesigning from scratch should be avoided in the majority of cases.

REDESIGNING FROM SCRATCH

To be clear, redesigning from scratch should not always be avoided. On occasion, a company inherits a product simply for its user base or domain name, or because it sees the potential to completely re-engineer the product from the ground up into something entirely different.

One example of a product that completely redesigned from the ground up is Bebo. What was once a fast-growing social network has since become multiple new products as a result of complete redesigns. In its latest relaunch, it has been developed into a messaging app, somewhat reminiscent of Slack.

The issue with redesigning from scratch is that you run the risk of alienating users. In certain cases, a product’s design and UX underperform so badly that a complete redesign is the only appropriate course of action. The problem arises when products are redesigned for little reason other than change for its own sake.

It’s important to ask two questions when pondering this decision:

  • Does my vision for the product clash considerably with the current design and framework?
  • Is the current product posing multiple substantial design and UX issues for users?

If the answer to either is yes, then this may well be the most appropriate course.

If you believe a redesign may cause a loss of users, answering yes to either question should override that worry. Sometimes, and only sometimes, a small proportion of the existing user base who are entirely opposed to change has to be discounted in order to move the product forward. You just have to be sure you are truly moving the product forward with a complete redesign—there have to be clear underlying reasons, such as those above.

REDESIGNING IN ITERATIONS

For most cases, this should be the route to take. By continuously iterating on a product, you avoid alienating the current user base, slowly but surely introducing new UI and UX enhancements with each version. This is a lot easier for users to digest, and typically helps avoid having them move to competitors. It also allows for the removal of a feature if it proves not to be effective or useful for new and existing users.

Redesigning in iterations can also often result in the best possible product. When you are constantly redesigning from the ground up, it eliminates the positive effects of stepwise refinement.

Take Google’s core search product, for example. I’d argue it has never been completely redesigned, but has instead been continuously iterated on for nearly two decades. Google has an incredibly complex product with a simple interface, and has iterated upon it in small steps to the point where the product is now extremely refined, powerful, and easy to use.

Another such example is InVision. A few years ago, they could have completely wiped a design that was looking tired and outdated. Instead of building something new around the latest short-term style trends, they chose to iterate on the current version one step at a time, with the goal of creating one of the finest tools in the design industry. All the while, they kept existing users satisfied by not overhauling every feature and layout.

In the above examples, you can see just how the products have progressed from something dated to cutting-edge, industry-leading product design—all through continuous iteration on features, layout, and styles.

This approach also sidesteps the issue of overhauling a design every time the design team or lead changes. It provides a consistent approach over long periods of time, and avoids individual designs and styles making their mark at the users’ expense.

Next time you are working on a design, ask yourself: should I really redesign this product from scratch, or can we achieve better long-term results with stepwise refinement?

 

 

[Source:- webdesignerdepot]

Update to Final Fantasy 15’s Controversial Chapter 13 Hits March 28


Square Enix confirms that the previously announced tweaks to a particularly divisive chapter of Final Fantasy 15 will be distributed via an update on March 28.

Several weeks after its release, the consensus seems to be that Final Fantasy 15 is a good game — and its sales performance certainly backs up its critical reception. However, there’s one section that many critics and fans didn’t enjoy much: Chapter 13.

If you’re still working your way through Final Fantasy 15, be warned, as it’s difficult to discuss why Square Enix is making changes to the sequence without straying into spoiler territory. Chapter 13 is something of a departure from the bulk of the game, and many would argue that it detracts from the overall experience.

This particular section of Final Fantasy 15 sees protagonist Noctis separated from his party, trapped in a maze, and stripped of many of his most useful abilities. Rather than the role-playing road trip they were previously enjoying, players are forced to stealthily evade robots in a labyrinth of darkened corridors.

The game’s director, Hajime Tabata, has stated that Chapter 13 was intended to offer a jarring contrast to the rest of the game. However, he’s also admitted that his ideas weren’t executed perfectly, confirming that plans were in motion to tweak this particular section in a future update.

“The amount of stress inflicted on the player while running through this chapter was greater than we had anticipated,” Tabata explained while speaking to US Gamer earlier this month. “We believe resolving this issue will naturally lead to a better gameplay experience.”

Now we know exactly when the update to Chapter 13 is scheduled to be released. During an Active Time Report livestream that took place yesterday, Square Enix confirmed that the update will drop on March 28, according to a report from Gematsu.

The company also revealed a major component of the changes being made to Chapter 13. Apparently, there will be a short section where Gladiolus Amicitia serves as the playable character — although it’s not completely clear whether this is actually part of Chapter 13, as the update is set to bring “enhancements” to various sections of the final stages of Final Fantasy 15.

It’s good to see Tabata and Square Enix responding to one of the sections of Final Fantasy 15 that has garnered the most criticism. Here’s hoping that next month’s update can remove some of its frustrating aspects while still retaining its larger role in the game’s narrative.

 

 
[Source:- GR]

 

Samsung might tease the Galaxy S8 in a short video at MWC

As already reported, the Galaxy S8 won’t be announced at Mobile World Congress 2017, as it will be released a bit later than usual this year. However, it looks like Samsung may have decided to still give us a glimpse of the upcoming flagship during its event in Barcelona.

According to a report from The Korea Herald, the tech giant will tease the Galaxy S8 in a one-minute trailer at MWC. The video will be played at a press event on February 26, where Samsung will announce the Galaxy Tab S3. Hopefully, the short video will give us more info about the device, which will likely be released in mid-April.

As with every year, there have been tons of rumors going around about Samsung’s new flagship devices. The Galaxy S8 and Galaxy S8 Plus are expected to be the first smartphones powered by the Snapdragon 835 processor and will come with Samsung’s own digital assistant called Bixby.

They will both sport much thinner bezels around the screen and ditch the home button. This means the fingerprint scanner will be moved to the back of the devices, as can be seen in the recent images that have leaked.

There are plenty of other interesting rumors regarding the smartphones. To learn more, check out our Galaxy S8 rumors post.

 

[Source:- androidauthority]

 

 

Software innovation in healthcare round-up


The healthcare sector is benefitting immensely from going digital. Recent eHealth announcements show how cloud-based solutions and collaborative platforms are pushing future medical discoveries, cross-border healthcare, and patient care into the 21st century.

Cloud-based open source platform inspires genetics research collaboration

Writing in Wired, the Commissioner of the US Food and Drug Administration, Robert M. Califf, MD, discusses a new open source R&D portal called precisionFDA, where “nearly 2100 individual members from 568 organisations are sharing and comparing data, software tools, and testing methodologies on the site.”

Designed to spur collaboration among researchers in next-generation sequencing (NGS), an advanced DNA testing process, the cloud-based portal will accelerate NGS technology development, increase collaboration, and ensure the medical community can develop data collectively rather than individually, reducing the need for duplicative clinical studies.

Another benefit “besides helping to accelerate the development of NGS technology, [is] it puts the agency at the centre of ongoing discussions, allowing us to stay up to date on issues and breakthroughs in the field,” Califf wrote.

NGS technology will be able to chart almost all of a person’s genome in a single run, much more quickly and economically than current methods. Genetic markers for diseases can help inform prevention efforts and improve diagnoses.

Common IT platform connects rare diseases specialists across the EU

In similar news of online collaboration, Dublin-based software company OpenApp has announced its software will aid 24 European Reference Networks to connect over 370 hospitals and nearly 1000 specialist rare disease centres across 25 EU Member States.

The Irish eHealth firm will develop and manage a common IT platform to support the ERNs.

The platform will allow teams of multi-disciplinary medical specialists to meet as a virtual clinical board. Some 30 million patients across the EU suffer from rare diseases, and will now be able to benefit from specialist diagnostics and suggested treatments wherever they are in Europe.

“Seeing this embedded in a pan-European effort to address rare diseases is exciting and will revolutionise equity of access to high-quality care,” commented Professor Alan Irvine of Crumlin Children’s Hospital, Ireland.

Investments in diabetes management software

Atlanta’s Grady Health System, operator of Atlanta’s Grady Memorial Hospital and numerous health centres, has begun implementing Glytec’s eGlycemic Management System® (eGMS), a personalised diabetes therapy management solution.

The diabetes management software system is made up of a set of modules that helps healthcare professionals better regulate insulin dosing for the care of patients with acute diabetes, hypoglycemia and hyperglycemia.

eGMS is integrated with Grady’s Epic electronic medical record (EMR), allowing users direct access from a patient’s chart without the need for a separate login.

Also included in the software system is a surveillance solution, which the hospital relies on for rapid identification of patients in need of insulin therapy. GlucoSurveillance® interfaces with Grady’s laboratory information system to perform continuous real-time surveillance of blood glucose values, flagging patients who meet pre-defined criteria for persistent hyperglycemia.

“Our rate of hypoglycemia among critically ill patients was not at a level we were comfortable with,” said Dr. Robert Jansen, Grady’s Chief Medical Officer and Chief of Staff. “As we worked to improve our care model, the clinical research conducted by Dr. Umpierrez using the Glytec system showed that the system has real merit. We were unanimous in our decision to use eGMS.”

 

 

[Source:- softwaretestingnews]

 

Meet the man behind Comic Sans


We’ve all seen Comic Sans: the typeface that’s both loathed and (secretly) admired—some people have even dedicated a website to educating people about the very limited use cases of Comic Sans! It’s made a huge global impact in the decades since its original use case, Microsoft’s Bob program.

By 1996, it was popular enough to be preinstalled on every Macintosh computer that rolled off the assembly line, but how exactly did this font come to be? What mind was behind this ultra-kiddy font?

Check out this video and meet Vincent Connare, for all intents and purposes, the father of Comic Sans. When he came up with the idea for the font, he looked through stacks and stacks of comic books, which is probably unsurprising. In particular, he leafed through DC Comics’ Batman and Watchmen stories…and he was inspired!

Commissioned by Microsoft to create a font, Connare came up with a font that resembled the comic lettering he’d noticed in the stories’ speech and thought bubbles.

Unfortunately for Connare, his boss at the time, one Robert Norton, disliked his comic-inspired typeface. Norton thought the face ought to be more “typographic” and had something against its overall quirkiness and weirdness. Connare persisted and defended Comic Sans’ ability to stand out, as it looked markedly different from anything people would see in their school textbooks. Even so, Comic Sans didn’t make it into Bob’s final release, but ultimately, Connare had the last laugh.

Today, Comic Sans is visible all over the world! While the font is definitely overexposed, Connare nonetheless gets a huge amount of gratification from all the places he sees the font when he travels. Whether it’s in neon signs for small businesses or on war memorials and packages of bread, Connare is vindicated.


To hear Connare tell it, he has no regrets surrounding the font. On the contrary, while he freely admits that Comic Sans definitely isn’t one of the better forms of art, he considers it, conceptually, quite possibly the best thing he’s ever accomplished in his career.

All told, not a bad outcome for a guy who worked as a typographic engineer at Microsoft and whose most famous font, arguably, never saw the light of day in the original Microsoft program for which it was intended. Interestingly, Connare also contributed to other famous typefaces, such as Trebuchet.

To understand why he came up with Comic Sans in the first place, we have to understand his philosophy on art: good art is art that gets noticed, while bad art is art that no one notices and is therefore a failure.

 

 

 

[Source:- webdesignerdepot]

Ghost Recon: Wildlands Closed Beta Gameplay Details


With the next closed beta kicking off tomorrow, Ubisoft reveals what sort of content and map size players will be able to play through in Ghost Recon: Wildlands this weekend.

Ubisoft is firing on all cylinders lately: the company just wrapped up another successful closed beta session for its upcoming melee game, For Honor, and is getting ready to send out the final known expansion for The Division, Last Stand. Rainbow Six Siege is also getting ready to launch its first season 2 content with Operation Velvet Shell, which introduces two new Spanish operators and a new map. Next on the to-do list is Ghost Recon: Wildlands, with another closed beta test scheduled to kick off tomorrow and run through the weekend.

Thanks to a recent guide from Ubisoft, fans now have a good idea of what kind of content will be waiting for them starting tomorrow. Even though the map in Ghost Recon: Wildlands is quite massive, with 21 regions, only the Itacua province will be available, though even this region appears to be pretty sizable. Every main story mission as well as all side activities will be unlocked in this region, and players have the option to play through them solo alongside three AI bots or with up to three human co-op players.

In addition to checking out the gameplay and missions, players also have full access to both the player character customization tools and the Gunsmith. Though briefly detailed last year in a trailer, Gunsmith is essentially the tool that enables players to look at and customize their currently unlocked set of weapons in the game. With over 50 customizable guns in the game, players can use Gunsmith to add or remove attachments, switch weapons, and further customize them with sprays and other items as they see fit.

Most fans will also be happy to note that even though this is a closed beta, Ubisoft is not imposing an NDA and is encouraging players to share and stream the beta content. And since most fans enjoy getting in-game loot in exchange for participating in beta events, Ubisoft has also promised a free Llama shirt that all beta players can use to customize their character once the full game launches in March.

Players already accepted into the beta can start preloading the tactical shooter now in preparation for tomorrow. While not everyone will be able to experience the beta, reports have begun to trickle in from players who are unable to use their unlock codes or have not received invites at all. With a bit more time left before the servers go live, hopefully these issues can be sorted out before the beta officially starts tomorrow.

Are you looking forward to this open world title or are you in wait and see mode before committing? Let us know in the comments below.

Ghost Recon: Wildlands closed beta will be available from February 3 to February 6 for PC, PlayStation 4, and Xbox One. The game then releases in full on March 7, 2017.

 

 

[Source:- GR]

 

Android 7.1.2 Nougat is official, public beta coming later today (Update 2: rolling out now)

Google has just officially announced Android 7.1.2 Nougat, and will begin rolling out the public beta build starting today!

The Android 7.1.2 beta will roll out to Pixel, Pixel XL, Nexus 5X, Nexus Player and Pixel C devices that are enrolled in the Android Beta Program starting today, while the company says the Nexus 6P will get the update “soon.”

Of course, you probably shouldn’t expect a ton of new features to come along with this new update. Android 7.1.2 will be an incremental maintenance release focused on refinements, which will include a number of bug fixes, optimizations, and a small number of enhancements for carriers and users. That’s the only description Google gave for this new version of Android, so we’ll have to wait and see what specific changes it brings.

Google says if you’d like to test out this new version of Android ASAP, you should enroll in the Android Beta Program. And as always, if you have an eligible device that’s already enrolled, your device should receive the update in the next few days. If you haven’t enrolled yet, head to this website, opt in your eligible Android phone or tablet, and that’s it. You’ll receive an OTA in just a few hours. And if you’d rather do things the old-fashioned way, you can always download and flash the update manually.

The final, consumer-ready version of 7.1.2 will be released in just a few months for all the devices listed above.

 

[Source:- androidauthority]

 

Big data applications

Richard J Self, Research Fellow – Big Data Lab, University of Derby, examines the role of software testing in the achievement of effective information and corporate governance.

As a reminder, software testing is about both verifying that software meets the specification and validating that the software system meets the business requirements. Most of the activity of software testing teams attempts to verify that the code meets the specification. A small amount of validation occurs during user acceptance testing, at which point it is normal to discover many issues where the system does not do what the user needs or wants.

It is only too clear that current approaches to software testing do not, so far, guarantee successful systems development and implementation.

IT project success

The Standish Group have been reporting annually on the success and failure of IT-related projects since the original CHAOS report of 1994, using major surveys of projects of all sizes. They use three simple definitions – successful, challenged and failed projects – as follows:

Project successful:

The project is completed on time and on budget, with all features and functions as initially specified.

Project challenged:

The project is completed and operational but over‑budget, over the time estimate, and offers fewer features and functions than originally specified.

Project failed:

The project is cancelled at some point during the development cycle.

Due to significant disquiet amongst CIOs about the definition of success requiring meeting the contracted functionality in a globalised and rapidly changing world, Standish Group changed the definition in 2013 to:

Project successful:

The project is completed on time and on budget, with a satisfactory result, which is, in some ways, a lower bar.

As the graph in Figure 1 shows, the levels of project success, challenge and failure have remained remarkably stable over time.

It is clear that, as an industry, IT is remarkably unsuccessful in delivering satisfactory products. Estimates of the resultant costs of challenged and failed projects range from approximately US$500 billion to US$6 trillion, compared with annual ICT spend of US$3 trillion and a world GDP of approximately US$65 trillion.

Clearly something needs to be done.

The list of types of systems and software failures is too long to include here, but a few examples include the recent announcements by Yahoo of the loss of between 500 and 700 million sets of personal data in 2012 and 2014, the loss of 75 million sets of personal and financial data by Target in 2013, and regular failures of operating system updates for iOS, Windows, etc.

Common themes, verification and validation

Evaluating some of the primary causes of this long list of failures suggests common themes, ranging from incomplete requirements capture, unit testing failures and volume test failures (caused by using too small an environment and too small data sets) to inappropriate HCI factors and the inability to effectively understand what machine learning is doing.

Using the waterfall process as a way of understanding the fundamentals of what is happening, even in agile and DevOps approaches, we can see that software verification is happening close to the end of the process just before implementation.

As professionals we recognise that there is little effective verification and validation activity happening earlier in the process.

The fundamental question for systems developers is, therefore, whether there is any way that the skills and processes of software testing can be brought forward to earlier stages of the systems development cycle in order to more effectively ensure fully verified and validated requirements specifications, architectures and designs, software, data structures, interfaces, APIs etc.

Impact of big data

As we move into the world of big data and the internet of things, the problems become ever more complex and important. The three traditional Vs of big data – velocity, volume and variety – stress the infrastructures, cause problems in ensuring that data dictionaries are consistent between the various database siloes, and undermine the ability to guarantee valid and correct connections between corporate master data and data found in other databases and social media.

Improved project governance

If the IT industry is to become more successful, stronger information and project governance is required: governance based on a holistic approach to the overall project, which ensures a more effectively validated requirements specification and far more effectively verified and validated non‑functional requirements, especially in the areas of security by design and the human‑to‑computer interfaces.

It is also vital to ensure that adequate contingencies are added to project estimates. The 2001 Extreme Chaos report observed that for many of the successful projects, IT executives took the best estimates, multiplied them by 2 and added another 50%. This is in direct contrast to most modern projects, where the best and most informed estimates are reduced by some large percentage and a ‘challenging target’ is presented to the project team. Inevitably, the result is a challenged or failed project.
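As a rough illustration of that heuristic (a minimal sketch; the 100-day figure below is hypothetical, not taken from the report), doubling the estimate and then adding 50% amounts to tripling the original number:

```python
def padded_estimate(best_estimate_days: float) -> float:
    """Contingency heuristic reported in the 2001 Extreme Chaos report:
    take the best estimate, multiply it by 2, then add another 50%."""
    doubled = best_estimate_days * 2   # multiply the best estimate by 2
    return doubled * 1.5               # add another 50% on top

# Hypothetical example: a 100-day best estimate becomes a 300-day plan,
# i.e. the heuristic is equivalent to a 3x multiplier.
print(padded_estimate(100.0))  # 300.0
```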

If we can achieve more effective project governance, with effective verification and validation of all aspects from the beginning of the project, the rewards are very large in terms of much more successful software that truly meets the needs of all the involved stakeholders.

12 Vs of project governance and big data

One effective approach is to develop a set of questions that can be asked of the various stakeholders, the requirements, the designs, the data, the technologies and the processing logic.

In the field of information security, ISO 27002 provides a very wide range of questions that can help an organisation of any size to identify the most important aspects that need to be solved. By analogy, a set of 12 Vs have been developed at the University of Derby which pose 12 critical questions which can be used both with big data and IoT projects and also for more traditional projects as the ‘12 Vs of IT Project Governance’.

The 12 Vs are:

  • Volume (size).
  • Velocity (speed).
  • Variety (sources/format/type).
  • Variability (temporal).
  • Value (what/whom/when?).
  • Veracity (truth).
  • Validity (applicable).
  • Volatility (temporal).
  • Verbosity (text).
  • Vulnerability (security/reputation).
  • Verification (trust/accuracy).
  • Visualisation (presentation).

As an example, the Value question leads towards topics such as:

Is the project really business focused? What questions can the project answer, will they really add value to the organisation, who will get the benefit, and what is that benefit? Is it monetary? Is it usability? Is it tangible or intangible?

What is the value that can be found in the data? Is the data of good enough quality?

The Vulnerability question leads towards: is security designed into the system, or added as an afterthought? Security failures can result in significant reputational damage, and so can incorrect processing.

The Veracity question is developed from the observation by J Easton2 that 80% of all data is of uncertain veracity: we cannot be certain which data are correct or incorrect, nor by how much the incorrect data are wrong (the magnitude of the errors).

Data sourced from social media is of highly uncertain veracity: it is difficult to detect irony, and humans lie and change their likes and dislikes. Data from sensor networks suffer from sensor calibration drift of random levels over time, and smart-device location services using assisted GPS have very variable levels of accuracy. A fundamental question that needs to be asked of all these data is: how can our ETL processes detect the anomalies? A second question is: to what extent do undetected errors affect the Value of the analyses and decisions being made?
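As an illustration of the kind of plausibility check an ETL stage could apply (a minimal, hypothetical sketch; the threshold and the sensor readings are assumptions, not taken from the article), one option is to flag values that sit far from the median of a batch:

```python
import statistics

def flag_suspect_readings(readings, threshold=3.5):
    """Flag readings whose distance from the median, scaled by the median
    absolute deviation (MAD), exceeds a threshold.

    This is only one crude rule; a real ETL pipeline would combine several
    checks (range limits, cross-source reconciliation, drift detection).
    """
    med = statistics.median(readings)
    mad = statistics.median(abs(r - med) for r in readings) or 1.0
    return [i for i, r in enumerate(readings)
            if abs(r - med) / mad > threshold]

# Hypothetical sensor feed: the 999.0 value is flagged as a suspect outlier.
print(flag_suspect_readings([20.1, 19.8, 20.4, 999.0, 20.0]))  # [3]
```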

Formal testing of BI and analytics

One further fundamental issue (identified by the attendees at The Software Testing Conference North 2016)3 was that formal software testing teams are very infrequently involved in big data analytics projects. The data scientists, apparently, ‘do their own thing’, and the business makes many business-critical decisions based on their ‘untested’ work. In one comment, the models developed by the data scientists produced different results depending on the order in which the data were presented, when the result should have been independent of the sequence.
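A simple way to surface that kind of order dependence (a hypothetical sketch, not the model described in the comment) is to run the same modelling function over shuffled copies of the same records and compare the results:

```python
import random

def is_order_independent(records, build_model, trials=5, seed=42):
    """Return True if `build_model` gives the same result for several
    shuffled orderings of the same records, False if any ordering differs.

    `build_model` is any function mapping a list of records to a result
    (a summary statistic, fitted parameters, a score table, ...).
    """
    rng = random.Random(seed)
    baseline = build_model(list(records))
    for _ in range(trials):
        shuffled = list(records)
        rng.shuffle(shuffled)
        if build_model(shuffled) != baseline:
            return False  # result changed with input order: investigate
    return True
    # (real checks on floating-point outputs would compare within a tolerance)

# Hypothetical examples: a mean of integers ignores input order;
# a "last value wins" rule almost certainly does not.
data = [3, 1, 4, 1, 5, 9]
print(is_order_independent(data, lambda xs: sum(xs) / len(xs)))  # True
print(is_order_independent(data, lambda xs: xs[-1]))             # expected False
```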

In conclusion, the fundamental challenge to the testing profession is to determine how its skills, knowledge, experience, processes and procedures can be applied earlier in the development lifecycle in order to deliver better validated and verified projects that count as ‘successful’ (in Standish Group terms). Are there opportunities to ensure more comprehensive and correct requirements specifications?

This article is based on the presentation delivered on the 28th September 2016 at The Software Testing Conference North 2016. Video can be found here. 

This article first appeared in the November 2016 issue of TEST Magazine. Edited for web by Jordan Platt.

 

 

[Source:- softwaretestingnews]

5 psychology rules every UX designer must know


Experience-based design…if that’s how you define your work as a designer, it might be a good time to reevaluate your approach.

Now, there’s nothing wrong with being an experienced designer; your experience could be an asset! However, it is essential to realize that there are many moving parts in a working design. For example, did you know that you shouldn’t just drastically redesign a website? Or that the colors that work on the exact same website (featuring the same products, in the same niche) will differ depending on whether the audience is predominantly male or female?

There’s a psychological approach to web design—based on decades of studies and psychology experiments. Below are five psychology-backed UX tips for your next redesign:

1) WEBER’S LAW OF JUST NOTICEABLE DIFFERENCE

Anyone who’s used Facebook over the last five years knows that not much has changed in that time. Facebook is a mega-corporation worth over $350 billion, so you might expect a lot to have changed in those years. Why is Facebook retaining every key element of its design? The answer to that question explains why every major website—including Google, Twitter and Amazon, despite their large budgets—avoids drastic redesigns.

It is explained by Weber’s law of just noticeable difference, which states that the slightest change in things won’t result in a noticeable difference; if you’re looking at a bulb, for example, and the light dims or brightens just a bit, you’re unlikely to notice the change—if it brightens significantly, however, you will. In the same way, if you’re carrying a weight of 100kg, removing 1kg is unlikely to make enough of a difference for you to notice. If you were to remove 10kg from the 100kg weight, however, the difference becomes instantly apparent.
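Stated more formally (a standard textbook formulation, not something quoted in the article), Weber’s law says the smallest change we can notice, ΔI, is a constant fraction k of the current stimulus intensity I:

```latex
\frac{\Delta I}{I} = k \qquad \text{equivalently} \qquad \Delta I = k\,I
```

With k on the order of a few percent for lifted weights, a 1kg change stands out against a 10kg load but not against 100kg, which is why small, incremental design tweaks tend to slip beneath users’ perception thresholds.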

Research shows that we dislike massive changes to existing structures and systems, even if those changes will benefit us, and there is ample evidence of user protests when major websites make massive changes and redesigns.

Simply put, Weber’s law coupled with our natural averseness to change shows that the best way to approach a redesign is subtly: make your redesign slow and subtle, changing a little here and there gradually—in such a way that most people won’t even know you’re doing a redesign—until you’ve completely revamped the design. Not only will this ensure your design is well accepted by the majority, but a good portion of your audience will have gotten used to the redesign before it is completed, and very few will complain.

2) UNDERSTAND THAT WE RESPOND TO COLOR DIFFERENTLY

While we often deeply trust our instincts and experience, it is another thing for them to stand up to scientific scrutiny. For example, did you know that the same design that works for an audience of male readers often won’t work for an audience of female readers—even if it’s for the same website selling the very same products?

One of the most important factors you should consider when redesigning a website is the audience. Is the audience predominantly male or female? This matters a great deal!

Research has found that people will form an opinion about things within 90 seconds, and that color influences up to 90 percent of the opinion people form. The color you use for your design alone can make it a failure or success.

That said, it is important to realize that men and women see colors differently. The graphics below show the colors both men and women like as well as the colors they dislike the most:

3) THE SENSORY ADAPTATION PHENOMENON

Have you ever wondered why you don’t feel your clothes or shoes? Or why, even though you were initially irritated by it, you no longer notice your neighbor’s dog’s constant barking?

This is explained by a psychological phenomenon called “sensory adaptation”: we tend to tune out a stimulus we are repeatedly exposed to—initially, we find it annoying, but eventually we just don’t notice it.

Now, how does this relate to web design? It’s simple: you design a website and use the very same color scheme and button color for the important parts you want the user to act on. Because these essential parts blend in with the overall color scheme, and people have been seeing the same color all over your design, they are naturally wired to tune them out—they don’t see the key elements on your page, and you lose out on conversions.

When designing or redesigning a website, it is essential to make your CTAs stand out; if the whole color scheme is blue, you must not use blue for the CTA or to highlight the most important action on the page. Most people believe red or orange is the most effective color for boosting conversions; in itself, it isn’t. A red button on a page with a red color scheme will convert poorly, but a green button on the same page will convert much better.

Use something that stands out for essential elements; this way, it doesn’t activate people’s sensory adaptation, and your conversion doesn’t suffer.

4) TYPE: BIGGER IS BETTER!

When it comes to text, designers often obsess over look and appeal: “Wow, should I use a serif?” “That new font looks dope! Let me give it a shot!” Yet psychology shows that, when it comes to design, most of the things we designers give importance to are not what end users really care about. While we care about aesthetics and how appealing the latest typeface will make our design appear, the average user cares about basic things like usability.

In essence, the average user cares a lot more about font size than about font type. In fact, research has shown that people want type to be bigger and simpler, and that larger type elicits a strong emotional connection in readers.

In short, people want simple, large type. Based on the available research, experts advise not using a font size smaller than 16px.

5) PERCEPTUAL SET

What you see will differ depending on your experiences; as with the image of the “vase or two faces,” if you’re an artist, especially if you just finished working on a vase, you’re likely to see a vase in the image. If you just left a gathering of lots of people, and if you’ve not seen a vase in months, you’re likely to see two faces.

This phenomenon is explained by the “perceptual set theory,” which explains our tendency to perceive information based on our expectations, existing information and experiences. In essence, people from different cultures are likely to perceive the very same thing differently.

The implication for web designers is that people have certain expectations of web design—some general and some specific to certain industries. For example, most people expect a site’s navigation bar to be in the header; putting it elsewhere (in the footer, for example) will confuse a lot of users and lead to a bad user experience. The same goes for every element of your site design.

It’s good to be innovative. When you are going to be innovative, however, make sure you include clues to guide people through the new elements. Most importantly, test people’s responses to the new elements and readily change anything people do not respond well to.