Jan 31, 2015

Google Wallet gets major support in deal with WePay

I would love to use Google Wallet. Unfortunately, it's rather challenging to find shops, restaurants, and services that support any form of digital payment where I live. They have the technology; they just haven't enabled it (for either Google Wallet or Apple Pay). This issue has been a major stumbling block for Google to gain any momentum with their payment system.
However, that will change very soon. Google has announced a deal with WePay that will enable online retailers to make Wallet available. From my perspective, this does two things:
  • Gives Wallet a major boost
  • Makes online shopping more secure
Many online shoppers I know, when given the option, go directly to PayPal. Why? The easiest way to keep your credit card information out of breached online databases is not to use those cards while shopping. To that end, using PayPal makes perfect sense. But some users aren't so sold on PayPal. I sell products outside of Amazon and, when I do, a good number of consumers don't want to use PayPal. Being able to offer Google Wallet would be a major boon to me as a seller. As a buyer? This is a serious win. Not only will it add yet another payment option, but it also means shopping online with your Android device will become even more secure.
Since I've owned a smartphone, I've refused to make purchases from my mobile devices, especially when I'm on a Wi-Fi network I don't know can be trusted or on my carrier's network. There's too much room for error when transmitting credit card information across such networks. A Google Wallet option is a game changer. I'd no longer hesitate to make purchases on the go.
But what about the benefit for Google? Simple. Because of the hesitancy of so many retail outlets to adopt digital payments, this deal with WePay will give Wallet the boost and the "cred" it needs -- to the tune of over 200,000 online retailers and funding platforms (like GoFundMe). Very soon, online shoppers should see Wallet buttons (next to the PayPal and major credit card buttons) popping up.
Google released the Instant Buy API in 2013. PayPal was launched in 1998, so it's had far more time to gain a foothold in the online payment space. But once Google Wallet buttons start appearing, anyone with an Android device would be smart to go with the integrated option.
Some online retailers have already rolled out the Google Wallet option. Newegg, for instance, proudly displays the Buy With Google button alongside PayPal and other credit card options. More online retailers will come on board soon (if they haven't already).
The WePay deal also makes it possible for app developers to make use of Google Wallet for in-app and other purchases. If you have a small business and offer a mobile app to make mobile purchasing easier, you can now roll in a Google Wallet payment option.
Win-win for everyone involved. Period.
It's my hope that, as more online retailers pick up Google Wallet as a payment option, brick-and-mortar shops will follow suit -- even in smaller towns. With a growing number of hacks, and the US still way behind in the chip-and-PIN card rollout, having the Google Wallet option makes perfect sense for retailers (both online and offline). And now, when I see that option available online, I'll be more apt to purchase when I'm on my Android smartphone.
If you haven't set up Google Wallet yet, check out my post "How to set up Google Wallet for easy, secure payments" for a step-by-step walkthrough.

Jan 27, 2015

Data spot check: Oh no, could it all be wrong? | Anonymous

When an unstoppable force (faulty software) meets an immovable object (user error), what happens? Tough to say, but you definitely want to take a second look at your sales reports.
When I first started in IT 20-plus years ago, a lot of our programs were developed in-house and we, the developers, were responsible for our own programs and for maintaining data integrity. In my observation, competition on the business software side has intensified over the last decade, and market forces have shifted the emphasis to controlling the price of the package. As a result, many vendors have shrunk, if not eliminated, their QC departments in an effort to cut development costs.
Then there's another problem: the users. You'd think it would be important to a business to ensure the data on which they base their decisions and initiatives would be correct. But doing so takes resources and often gets pushed lower on the priority list.

Wading through the data

I recently saw what can happen with lack of oversight from both software vendors and users. My job at the company was IT manager, and one of my duties was to run reports for the execs. In the past, I'd noticed discrepancies in the data entered by our company's sales reps and had made recommendations for putting software governance in place for creating new customer IDs.
However, management frowned on anyone dealing with the sales team and said they didn't see any reason to make changes. But the problems with this philosophy came to light once again as I worked to produce an end-of-year sales report from our main accounting package.
We'd been running the package for years, with many modifications made by the vendor's development team during that time. When we first installed it, we ran it in parallel for several months simply to verify the transactions and data. I'd tried to keep an eye on it ever since, but as demand for IT services ballooned while headcount shrank, that became next to impossible. Meanwhile, management wouldn't buy into hiring more worker bees.
I was assembling the data from our four major locations to show sales trends for the prior three years. Our package displayed only one year at a time, and the COO wanted to see all three years on one page ranked by top sales for the most recent year.
The data had to be scrubbed as some locations had used multiple IDs for the same customer. The sales team apparently would choose whichever number struck their fancy that day to enter a sale. My mind whirred as I thought someone should have noticed this; then again, I'd often been amazed at the accounting department's lack of data integrity enforcement. I made note to once again bring it to management's attention.
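A minimal sketch of the kind of scrubbing involved: collapse each alias ID to one canonical customer before totalling sales by year. The IDs, alias table, and amounts below are hypothetical, not from the actual system.

```python
# Sketch of the scrubbing step: map each alias ID to one canonical
# customer before totalling sales by year. IDs and amounts are made up.

# In practice this table comes from manual review of the customer master.
ALIASES = {
    "CUST-42":    "CUST-0042",
    "CUST-0042A": "CUST-0042",
}

def canonical_id(cust_id):
    return ALIASES.get(cust_id, cust_id)

def totals_by_customer(sales):
    """sales: iterable of (customer_id, year, amount) tuples.
    Returns {canonical_id: {year: total}}."""
    totals = {}
    for cust_id, year, amount in sales:
        cid = canonical_id(cust_id)
        totals.setdefault(cid, {}).setdefault(year, 0.0)
        totals[cid][year] += amount
    return totals

sales = [
    ("CUST-0042",  2012, 100.0),
    ("CUST-42",    2012, 50.0),   # same customer, different ID
    ("CUST-0042A", 2013, 75.0),
]
print(totals_by_customer(sales))
# {'CUST-0042': {2012: 150.0, 2013: 75.0}}
```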
Once I had the data plugged into my structure, I produced a report that I hoped would meet the COO's demands. Before passing it along, I perused it to see if I could visually spot any anomalies. As I scanned down the page I saw a customer that had $0.00 purchases three years prior, $350,345 purchases two years prior, then $0.00 in the last year.
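That kind of visual spot check can also be automated. A rough sketch: flag any customer whose sales swing by more than a threshold from one year to the next (for instance $0.00 to $350,345 and back to $0.00). The data and threshold here are illustrative.

```python
# Sketch: automate the visual spot check by flagging customers whose
# sales swing by more than a threshold between consecutive years.
# Data and threshold are illustrative.

def flag_anomalies(history, jump=100_000):
    """history: {customer: [year1_total, year2_total, year3_total]}.
    Returns customers with a year-over-year change bigger than `jump`."""
    flagged = []
    for cust, years in history.items():
        if any(abs(b - a) > jump for a, b in zip(years, years[1:])):
            flagged.append(cust)
    return flagged

history = {
    "ACME":  [120_000, 130_000, 125_000],   # steady -- fine
    "ODDCO": [0.0, 350_345.0, 0.0],         # the pattern described above
}
print(flag_anomalies(history))   # ['ODDCO']
```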

Run a report and check it twice

Thinking it odd, I checked the numbers I'd been given, and they reflected the data in my report. I fired up the accounting software and went to the customer maintenance module, where I could view one customer at a time along with eight years of sales history. The numbers for the last two years matched the numbers in the report I'd run. But the figure from three years prior was drastically different: instead of $0.00, the history showed $256,312.
My heart was in my throat. How could this be? I reran the reports that I had been given, and they again matched my report. I could not see the coded back end of the form that delivered the additional sales onto the maintenance form, but because of my experience in development I was pretty sure of what was happening.
I made a quick call to the vendor's tech line, and they flushed the buffers, correcting the form. Now everything matched for the report I was creating. But I wondered if I could trust the numbers. Were they really correct? How many unseen buffers or fields had not been flushed in other situations? What other errors lurked beneath the surface, undetected?
We can't do the volume of transactions without software, and we have to trust to some degree. Hopefully, the errors that are missed remain minor.

Jan 21, 2015

Anxiety Medications May Be Tied to Alzheimer's Risk – WebMD

Older adults who habitually use sedatives for anxiety or insomnia may have a heightened risk of developing Alzheimer's disease, a new study suggests.
The drugs in question are benzodiazepines, a widely prescribed group of sedatives that include lorazepam (Ativan), diazepam (Valium) and alprazolam (Xanax). Older adults commonly take the drugs for anxiety or insomnia, often long-term, according to background information in the study.
That's despite the fact that guidelines call for only short-term use of the drugs, at most. In 2012, the American Geriatrics Society (AGS) put benzodiazepines on its list of drugs considered "potentially inappropriate" for seniors, because of risks like confusion, dizziness and falls.
The current study isn't the first to link benzodiazepines to Alzheimer's risk, but it adds to evidence that longer-term use of the drugs -- beyond three months -- might be a risk factor, according to lead researcher Sophie Billioti de Gage, a Ph.D. candidate at the University of Bordeaux, in France.
"For people needing or using benzodiazepines, it seems crucial to encourage physicians to carefully balance the benefits and risks when renewing the prescription," Billioti de Gage said.
But the study was only able to find an association between the drugs and Alzheimer's risk. It wasn't designed to definitively prove that the drugs caused the memory-robbing condition, according to geriatrics specialist Dr. Gisele Wolf-Klein, who was not involved in the research.
One reason is that the findings are based on prescription records. "We know the drugs were prescribed, but we don't know how often people took them, or if they took them at all," said Wolf-Klein, director of geriatric education at North Shore-LIJ Health System in New Hyde Park, N.Y.
Regardless, she said, benzodiazepines have enough known risks to warrant concern.
"There is absolutely no doubt these drugs have dangerous side effects," Wolf-Klein said. "It's important for people to understand that they can be addictive, and increase the risk of confusion and falls."

Unintended Consequences: The Concise Encyclopedia of Economics | Library of Economics and Liberty

The law of unintended consequences, often cited but rarely defined, is that actions of people—and especially of government—always have effects that are unanticipated or unintended. Economists and other social scientists have heeded its power for centuries; for just as long, politicians and popular opinion have largely ignored it.
The concept of unintended consequences is one of the building blocks of economics. Adam Smith’s “invisible hand,” the most famous metaphor in social science, is an example of a positive unintended consequence. Smith maintained that each individual, seeking only his own gain, “is led by an invisible hand to promote an end which was no part of his intention,” that end being the public interest. “It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner,” Smith wrote, “but from regard to their own self interest.”
Most often, however, the law of unintended consequences illuminates the perverse unanticipated effects of legislation and regulation. In 1692, the English philosopher John Locke, a forerunner of modern economists, urged the defeat of a parliamentary bill designed to cut the maximum permissible rate of interest from 6 percent to 4 percent. Locke argued that instead of benefiting borrowers, as intended, it would hurt them. People would find ways to circumvent the law, with the costs of circumvention borne by borrowers. To the extent the law was obeyed, Locke concluded, the chief results would be less available credit and a redistribution of income away from “widows, orphans and all those who have their estates in money.”
In the first half of the nineteenth century, the famous French economic journalist Frédéric Bastiat often distinguished in his writing between the “seen” and the “unseen.” The seen were the obvious visible consequences of an action or policy. The unseen were the less obvious, and often unintended, consequences. In his famous essay “What Is Seen and What Is Not Seen,” Bastiat wrote:
There is only one difference between a bad economist and a good one: the bad economist confines himself to the visible effect; the good economist takes into account both the effect that can be seen and those effects that must be foreseen.
Bastiat applied his analysis to a wide range of issues, including trade barriers, taxes, and government spending.
The first and most complete analysis of the concept of unintended consequences was done in 1936 by the American sociologist Robert K. Merton. In an influential article titled “The Unanticipated Consequences of Purposive Social Action,” Merton identified five sources of unanticipated consequences. The first two—and the most pervasive—were “ignorance” and “error.”
Merton labeled the third source the “imperious immediacy of interest.” By that he was referring to instances in which someone wants the intended consequence of an action so much that he purposefully chooses to ignore any unintended effects. (That type of willful ignorance is very different from true ignorance.) The Food and Drug Administration, for example, creates enormously destructive unintended consequences with its regulation of pharmaceutical drugs. By requiring that drugs be not only safe but efficacious for a particular use, as it has done since 1962, the FDA has slowed down by years the introduction of each drug. An unintended consequence is that many people die or suffer who would have been able to live or thrive. This consequence, however, has been so well documented that the regulators and legislators now foresee it but accept it.
“Basic values” was Merton’s fourth source of unintended consequences. The Protestant ethic of hard work and asceticism, he wrote, “paradoxically leads to its own decline through the accumulation of wealth and possessions.” His final case was the “self-defeating prediction.” Here he was referring to instances when the public prediction of a social development proves false precisely because the prediction changes the course of history. For example, the warnings earlier in this century that population growth would lead to mass starvation helped spur scientific breakthroughs in agricultural productivity that have since made it unlikely that the gloomy prophecy will come true. Merton later developed the flip side of this idea, coining the phrase “the self-fulfilling prophecy.” In a footnote to the 1936 article, he vowed to write a book devoted to the history and analysis of unanticipated consequences. Although Merton worked on the book over the next sixty years, it remained uncompleted when he died in 2003 at age ninety-two.
The law of unintended consequences provides the basis for many criticisms of government programs. As the critics see it, unintended consequences can add so much to the costs of some programs that they make the programs unwise even if they achieve their stated goals. For instance, the U.S. government has imposed quotas on imports of steel in order to protect steel companies and steelworkers from lower-priced competition. The quotas do help steel companies. But they also make less of the cheap steel available to U.S. automakers. As a result, the automakers have to pay more for steel than their foreign competitors do. So a policy that protects one industry from foreign competition makes it harder for another industry to compete with imports.
Similarly, Social Security has helped alleviate poverty among senior citizens. Many economists argue, however, that it has carried a cost that goes beyond the payroll taxes levied on workers and employers. Martin Feldstein and others maintain that today’s workers save less for their old age because they know they will receive Social Security checks when they retire. If Feldstein and the others are correct, it means that less savings are available, less investment takes place, and the economy and wages grow more slowly than they would without Social Security.
The law of unintended consequences is at work always and everywhere. People outraged about high prices of plywood in areas devastated by hurricanes, for example, may advocate price controls to keep the prices closer to usual levels. An unintended consequence is that suppliers of plywood from outside the region, who would have been willing to supply plywood quickly at the higher market price, are less willing to do so at the government-controlled price. Thus results a shortage of a good where it is badly needed. Government licensing of electricians, to take another example, keeps the supply of electricians below what it would otherwise be, and thus keeps the price of electricians’ services higher than otherwise. One unintended consequence is that people sometimes do their own electrical work, and, occasionally, one of these amateurs is electrocuted.
One final sobering example is the case of the Exxon Valdez oil spill in 1989. Afterward, many coastal states enacted laws placing unlimited liability on tanker operators. As a result, the Royal Dutch/Shell group, one of the world’s biggest oil companies, began hiring independent ships to deliver oil to the United States instead of using its own forty-six-tanker fleet. Oil specialists fretted that other reputable shippers would flee as well rather than face such unquantifiable risk, leaving the field to fly-by-night tanker operators with leaky ships and iffy insurance. Thus, the probability of spills probably increased and the likelihood of collecting damages probably decreased as a consequence of the new laws.

About the Author

Rob Norton is an author and consultant and was previously the economics editor of Fortune magazine.

Further Reading

Bastiat, Frédéric. “What Is Seen and What Is Not Seen.” Online at: http://www.econlib.org/library/Bastiat/basEss1.html.
Hayek, Friedrich A. New Studies in Philosophy, Politics, Economics and the History of Ideas. Chicago: University of Chicago Press, 1978.
Merton, Robert K. Sociological Ambivalence and Other Essays. New York: Free Press, 1976.


Switzerland, volatility and unexpected consequences | Stephen Bartholomeusz

The extraordinary surge in the Swiss franc overnight as Switzerland’s central bank abandoned its three-year-old peg against the euro is a dramatic illustration of the unintended and potentially unpleasant consequences of unconventional monetary policies.
The Swiss franc initially rocketed almost 40 per cent against the euro before settling at a still jaw-dropping one-day move of 18 per cent. That was after the Swiss National Bank announced it was abandoning the cap of SFr1.20 per euro it had set in September 2011.
The original policy came at a time when the eurozone was in crisis and there were severe doubts about the solvency of a number of European governments and their banks.
From early 2010 until the SNB unveiled its response, the Swiss currency had appreciated a staggering 44 per cent against the euro as European investors and individuals fled the euro for their traditional safe haven. Not surprisingly, given the impact that was having on the competitiveness of Swiss industry, there was enormous pressure on the SNB to do something and it did.
Since 2011 the bank has been printing francs to buy euros. It has printed a lot of francs, with Switzerland’s foreign exchange reserves rising by about SFr240 billion since the policy started.
The Swiss, however, now find themselves on the cusp of being caught between divergent monetary policies of the eurozone and the US, a divergence that would almost inevitably see the SNB having to increase the scale of its sales of SFrs and purchases of euros exponentially.
The European Central Bank is about to launch its own version of quantitative easing -- perhaps as early as next week -- even as the US is expected to start normalising its monetary policies sometime this year, having ended its $US3.5 trillion program of bond and mortgage purchases last October.
The ECB program, which could see its balance sheet expand by about €1 trillion over time, will see it printing money in order to buy sovereign bonds and will drive the Euro down against the US dollar and other stronger currencies.
The SNB self-evidently believed it couldn’t stand against the imminent tide of inflows into the SFr by continuing to cap the value of its currency relative to the euro, and that it was better and less costly to pre-empt the ECB’s action with its own move rather than be forced into it later, at far greater cost, by the scale of the inflows.
The SNB did try to moderate the impact of the move by reducing its primary interest rate by 50 basis points, to minus 0.75 per cent, in order to deter safe haven inflows, but whether that will have any impact on investors and companies fleeing the euro or rouble is questionable.
The SNB’s move blind-sided the markets -- and other central banks. The extent of the initial appreciation of 39 per cent against both the euro and the US dollar was an obvious signal that there were very substantial short positions out against the SFr (US investors alone held short positions of more than $US2.5 billion) and that there was a mad, panicked scramble to cover them. The waves that the SNB decision sent through international currency, equity and bond markets could be a preview of more volatility and unexpected policymaking to come.
The disciplined and conservative Swiss have, in effect, been innocent victims of, first the reckless behaviour of their eurozone neighbours, and now the latest desperate attempt to prevent the eurozone from sliding further into deflation and lengthy recession. Swiss exporters have had their competitiveness decimated in an instant.
We’ve had a taste of what that’s like, with the QE programs run by the US and Japan post-crisis keeping the Australian dollar up at levels that haven’t enabled the currency to reflect and ease the massive adjustments needed to rebalance activity after the bursting of the commodity price bubble.
With the eurozone about to initiate its own QE experiment the normalisation of global monetary policy and currency settings is still a long way off, despite the likelihood that the US will start raising interest rates from near-zero levels either mid-year or in the second half of the year.
The US Federal Reserve Board may be influenced by the extent to which the US dollar appreciates against the other major currencies, particularly the euro, since that appreciation threatens to slow the recovery of the US economy.
There are some reservations about how effective a eurozone QE program might be in improving European competitiveness and in sparking modest levels of inflation.
Unlike the US, where companies -- including relatively small organisations -- are funded by markets, funding in the eurozone is primarily via bank channels, and therefore the ECB doesn’t have the ability to inject liquidity directly into business in the way that the Fed has.
The ECB, and other central banks, are also going to be confronted by the tide of deflation occurring as a result of the collapse in the oil price in particular and commodities hard and soft in general.
The extent to which the Fed’s program has contributed to the gradual recovery in the US economy is unclear. Japan’s version of QE is struggling to generate any significant momentum in its economy. There is a continuing debate about the real effectiveness of the unconventional policies that have been trialled since the financial crisis.
What is beyond dispute, however, is that when the major economies implement unconventional policies on a grand scale they do have unexpected and potentially unpleasant consequences for third party investors and economies, as the Swiss have now discovered.
With the three major developed economies entering quite different phases of their post-crisis experiences, the potential for more shocks, stresses and volatility in markets and real economies is arguably rising more than six years after the crisis rather than receding.

The best computer security advice you'll get | Roger A. Grimes

I couldn't put my finger on what was nagging at me the last few months. When I finally sorted it out, it was the realization that most computer security advice is an absolute waste of time -- and most of what isn't is barely useful.
Even I'm guilty. Statements I've spouted in the past, like using long and complex passwords or hardening your computer system, don't really deliver much value. Disable weak password hashes? That was good advice 15 years ago. Use an up-to-date antivirus program? If that worked, we would have solved the problem decades ago.
When I look at the data of how people and computers are compromised, those previous recommendations didn't effectively address the attack vectors that make malicious hackers so successful. Instead of giving you dozens to hundreds of truly ineffective recommendations, I'm going to give you a few basic defenses that really work.
Forget all the computer security advice you've read in the past -- even from me. This is the real deal. Everything else is wasted cycles.

Patch the most popular software first

If you look at how most computers are compromised, it's through unpatched software. Usually, the exploited unpatched software is the popular software used by everyone. Today, on the client side, Oracle Java leads the pack, followed by Adobe Flash and Acrobat Reader. On the server side, it's unpatched admin or remote access tools. The most popular programs change over time. What doesn't change is that those programs are the ones most often exploited. Bank robbers rob banks because that's where the money is. Malicious hackers and malware concentrate on exploiting the most popular programs because those are the ones most likely to be on the computers they want to compromise.
You're going to get far more bang for your buck by patching the most commonly exploited programs and doing that perfectly than patching almost all of your programs with less rigor (which is the case in most organizations). If you can't patch or mitigate the most exploited programs, the rest of your efforts aren't worth much.
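As a sketch of what "patch the most exploited first" might look like in practice, here's a toy prioritizer that sorts a software inventory by exploit-prevalence counts. The package names and counts are made-up placeholders, not real threat data.

```python
# Sketch: rank an inventory of installed software by how often each
# package shows up in exploit reports, so the most-attacked programs
# get patched first. Names and counts are placeholders.

EXPLOIT_PREVALENCE = {
    "java": 412,
    "flash": 387,
    "acrobat-reader": 260,
    "office": 90,
}

def patch_priority(installed):
    """Return installed packages sorted most-exploited first."""
    return sorted(installed,
                  key=lambda pkg: EXPLOIT_PREVALENCE.get(pkg, 0),
                  reverse=True)

print(patch_priority(["office", "flash", "java", "inhouse-tool"]))
# ['java', 'flash', 'office', 'inhouse-tool']
```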

Don't get socially engineered

Social engineering is a fancy name for a con, accomplished over the phone, via email, or on the Web, where the con artist manages to extract some vital piece of information or convince the victim to install malware. The only way to guard against social engineering is to keep your user training up to date to combat the most prevalent threats, which most companies fail to do.
Test your employees, and if you can successfully socially engineer them, do a better job of education. If you have an excellent user education program and employees still fail the test, redouble your efforts.

Make sure your user education material tells people they're more likely to be exploited by trusted websites than by strange or new websites. Tell users not to be tricked into installing new programs. Let them know that popular free software is often full of unwanted programs and malware (you can't even trust CNET's Download.com).

Two-factor authentication has its benefits

Although the security of 2FA (two-factor authentication) is often oversold, its effectiveness often depends on which risks you think you're mitigating. For example, 2FA can't stop most of today's APTs (advanced persistent threats) once they have full control of your PC -- but 2FA is great for preventing phishing attacks (which often precede the ultimate compromise).
If you can be strict enough to allow only 2FA when users log on to company resources, then there's no logon name and password combination to steal. When the fake phishing email arrives asking for the user's logon credentials -- sorry, bad guy, you're out of luck. This works well only if you use 2FA everywhere on the corporate network and no internal site still falls back to a logon name and password.
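For the curious, here's roughly how time-based 2FA codes (TOTP, RFC 6238) are generated and checked, using only Python's standard library. A real deployment would use a vetted library and allow for clock-skew windows and replay protection; this is only a sketch.

```python
# Sketch: generating and verifying time-based one-time passwords
# (TOTP, RFC 6238) with the standard library only.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute the TOTP code for a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32)
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret_b32, submitted, now=None):
    """Constant-time comparison of a submitted code against the expected one."""
    return hmac.compare_digest(totp(secret_b32, now), submitted)

# RFC 6238 test vector: secret "12345678901234567890" at t=59 -> 287082
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))   # 287082
```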

Don't use the same passwords across systems or websites

After phishing, the most common way hackers obtain your password is from other systems and sites. Many users have been successfully phished for their Facebook or Twitter logon, and the attackers then try the same password against the user's corporate logon. It works all the time.
Make sure your corporate passwords never match any password you use off the corporate network -- and don't use the same passwords on multiple websites. Even on the corporate network, local admin and service/daemon accounts should never share passwords on different systems -- it allows a credential theft attacker to leverage a single compromise into a network-wide compromise in minutes. Not sharing local passwords is one of the best measures you can take to slow down attackers and minimize the damage.
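One way to catch shared local passwords is to compare stored password hashes across hosts: if two machines have the same local-admin hash, they share the same password. A rough sketch, with fabricated hostnames and hashes:

```python
# Sketch: detect local-admin password reuse by grouping hosts on the
# stored password hash. Hostnames and hashes are fabricated.
from collections import defaultdict

def shared_hashes(host_hashes):
    """host_hashes: {hostname: local_admin_hash}.
    Returns {hash: [hosts]} for any hash used on more than one host."""
    by_hash = defaultdict(list)
    for host, pw_hash in host_hashes.items():
        by_hash[pw_hash].append(host)
    return {h: hosts for h, hosts in by_hash.items() if len(hosts) > 1}

inventory = {
    "web01": "aad3b435b51404ee...",
    "web02": "aad3b435b51404ee...",   # same hash => same local admin password
    "db01":  "5f4dcc3b5aa765d6...",
}
print(shared_hashes(inventory))
# {'aad3b435b51404ee...': ['web01', 'web02']}
```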

Don't have permanent members in your highest elevated groups

Malicious hackers always escalate their privileges to obtain the highest security credentials in the network. Once they have those, it's game over. Want to frustrate a hacker? Don't have any permanent members of any elevated group, and monitor and alert on unexpected member additions. There are ways around this defense, but most hackers are stymied when their go-to methodologies fail. Frustrate a hacker today!
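Monitoring for unexpected member additions can be as simple as diffing a current snapshot of group membership against an approved baseline. A sketch with typical Active Directory group names; how you pull the snapshot depends on your directory tooling.

```python
# Sketch: diff current membership of high-privilege groups against an
# approved baseline and report the surprises.

BASELINE = {
    "Domain Admins": set(),       # ideally empty: no permanent members
    "Enterprise Admins": set(),
}

def unexpected_members(current):
    """current: {group: set(members)}. Returns unapproved members per group."""
    alerts = {}
    for group, members in current.items():
        extra = members - BASELINE.get(group, set())
        if extra:
            alerts[group] = extra
    return alerts

snapshot = {"Domain Admins": {"jdoe"}, "Enterprise Admins": set()}
print(unexpected_members(snapshot))   # {'Domain Admins': {'jdoe'}}
```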

Put your event monitoring on a diet

If you're collecting a bazillion events a day, you're doing it wrong. Instead, focus on defining only the events that indicate maliciousness, and alert only on those. Everything else is trying to find needles in a haystack. If you want to know what events to monitor, email me.
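The "diet" might look like this in miniature: keep an explicit set of alert-worthy event IDs and drop everything else. The Windows event IDs below are illustrative choices, not a vetted detection list.

```python
# Sketch: alert only on a short, deliberate list of event IDs rather
# than collecting everything. IDs below are illustrative examples.

ALERT_EVENT_IDS = {
    4720,   # user account created
    4732,   # member added to a security-enabled local group
    1102,   # audit log cleared
}

def filter_events(events):
    """Keep only events whose 'event_id' is on the alert list."""
    return [e for e in events if e["event_id"] in ALERT_EVENT_IDS]

stream = [
    {"event_id": 4624, "msg": "logon"},         # routine -- dropped
    {"event_id": 1102, "msg": "log cleared"},   # alert-worthy
]
print(filter_events(stream))
# [{'event_id': 1102, 'msg': 'log cleared'}]
```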

Network traffic analysis is a godsend

Today's attackers gain a regular user's credentials, then begin moving around the network, accessing servers and sites the user's logon credentials can reach. Or they use memory-resident software that's really hard to detect. But no matter what they use, bad guys move around networks in illegitimate ways. Use a network flow analysis tool, define what is normal, and alert on the abnormal.
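A toy version of that baseline-and-alert idea, using fabricated flow records: learn which hosts each account normally reaches during a training window, then flag anything new.

```python
# Sketch: learn per-user destination baselines from normal traffic,
# then flag logons to hosts outside the baseline. Records are fabricated.
from collections import defaultdict

def build_baseline(history):
    """history: iterable of (user, dest_host) pairs from normal traffic."""
    baseline = defaultdict(set)
    for user, host in history:
        baseline[user].add(host)
    return baseline

def flag_new_destinations(baseline, live):
    """Return (user, host) pairs not seen in the baseline."""
    return [(u, h) for u, h in live if h not in baseline.get(u, set())]

baseline = build_baseline([("alice", "fileserver"), ("alice", "mail")])
live = [("alice", "mail"), ("alice", "hr-db")]   # hr-db is new for alice
print(flag_new_destinations(baseline, live))     # [('alice', 'hr-db')]
```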

Whitelisting works better than antimalware

If everyone used a whitelisting application control program, it would make everyone's life easier. Whitelisting programs can prevent previously undefined programs from executing -- a terrific way to stop previously unknown malware. But even if you can't use it in enforcement mode, turn on your application control program in audit-only mode. Then you can alert on and respond to new suspicious programs without interrupting normal operations.
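Audit-only mode boils down to hashing what runs and logging anything off the approved list instead of blocking it. A sketch, with made-up file contents standing in for real binaries:

```python
# Sketch: audit-only allowlisting -- hash whatever runs and log anything
# not on the approved list, without blocking it. Contents are made up.
import hashlib

APPROVED_HASHES = {
    hashlib.sha256(b"approved-binary-contents").hexdigest(),
}

def audit(binary_bytes, path):
    """Return None if the binary is approved, else an audit record."""
    digest = hashlib.sha256(binary_bytes).hexdigest()
    if digest in APPROVED_HASHES:
        return None
    return {"path": path, "sha256": digest,
            "action": "audit-only: logged, not blocked"}

print(audit(b"approved-binary-contents", "/usr/bin/known"))    # None
print(audit(b"something-new", "/tmp/dropper")["action"])       # audit-only: logged, not blocked
```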

Focus on how, not what

Lastly, learn how badness breaks into your network and put less focus on names. The name of the malware program on an exploited computer isn't nearly as useful as how it got in (through unpatched software, social engineering, and so on). Learn those modalities and focus on mitigating those types of threats; then you have a real computer security defense plan in the works.
After every major public hacking attack, I read article after article offering absolutely useless advice. Those writers aren't thought leaders. They are parroting the unoriginal, unsupported dogma they've read. They haven't spent years looking at the data and interacting with hacked customer after hacked customer. I have. This advice is the real deal. Follow it, and you'll be better off than anyone else.