Showing posts with label wtf.

Wednesday, February 10, 2010

...Now What?

A few weeks ago my parents' desktop died. This was my old Athlon XP computer, the one I had before I bought my current machine (about a year and a half ago). Some basic diagnostics showed that either the CPU or the motherboard had died, and I didn't have the parts on hand to determine which. So, my dad went looking on eBay for a replacement motherboard and CPU to go with the existing memory, as they didn't really have the money to buy a new computer right now (actually, I was the one who did most of the spec-checking, to make sure everything would be compatible).

A few days ago, the motherboard and CPU arrived, and I quickly set about determining whether or not they worked, as the return window was limited. As my past experiences have made me a little paranoid in that respect, naturally the first thing I did was run MemTest86 on it.

While it seemed to work for the most part, about once per pass (each pass took a couple of hours) an error would show up on test #7. After leaving it running overnight, some patterns emerged. The error was always the 0x02000000 bit getting set when it should have been clear, and it usually occurred when the value written to memory was 0xFDxxxxxx (less commonly, 0xF9xxxxxx). The base address of the error always seemed to be 0x3C, 0x7C, 0xBC, or 0xFC. The errors also appeared to cluster in the low 512 megs of the memory space.

In short, it appeared to be 1 bit every 64 bytes, where the containing byte changes from 11111x01 to 11111x11.
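
The arithmetic behind that is just an XOR of the word that was written and the word that was read back. Here's a tiny C# sketch of that check; the low 24 bits are placeholders, since only the top byte was affected:

using System;

class BitFlipCheck
{
    static void Main()
    {
        // One representative failure from the MemTest86 run; the low 24 bits
        // are arbitrary placeholders, since only the top byte was affected.
        uint written  = 0xFD000000;   // pattern written to memory
        uint readBack = 0xFF000000;   // pattern read back

        uint flipped = written ^ readBack;
        Console.WriteLine("Flipped bit(s): 0x{0:X8}", flipped);   // prints 0x02000000

        // Top byte goes 11111101 -> 11111111: the 0x02 bit of that byte was set
        // when it should have been clear, i.e. the 0x02000000 bit of the word.
    }
}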

Separately, neither of the two 512-meg DIMMs shows this problem on its own, regardless of which of the two sockets the DIMM is in (as tested by running MemTest86 overnight, which would be expected to produce about a dozen errors). However, when both are installed in one order, this error occurs. Peculiarly, when they are installed in the opposite order, MemTest86 begins spewing out memory errors and soon crashes (presumably due to memory errors at the locations the program itself occupies).

So, where is the problem? Well, I wish I knew the answer to that.

The fact that both DIMMs worked correctly when alone suggests that the DIMMs are good (these are the same ones that were used in the computer that failed, and to my knowledge there was nothing wrong with them then). The fact that each DIMM works in both sockets suggests that it's not a bad socket. The fact that it's every 64 bytes tells me that it's not one of the data lines on the DIMMs or on the motherboard, as the data width is 64 bits (64 pins). The fact that the error period isn't into the kilobytes indicates that it's not a CPU cache failure, as I saw in the past.

64 bytes is the size of a cache line for this CPU, which seems to hint that the problem lies somewhere after the point where 64-bit memory reads are assembled into whole cache lines; unfortunately, I don't know exactly where that occurs, so I can't narrow down the possibilities. Yet the fact that it only seems to occur in the lowest 512 megs of the memory space is very suspicious, and seems to point at one of the DIMMs, sockets, or data paths on the motherboard, which appear to have already been ruled out by the earlier tests.
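
As a quick sanity check on the 64-byte period: all of the reported base addresses land at the same offset within a 64-byte cache line, which a trivial sketch confirms:

using System;

class ErrorOffsetCheck
{
    static void Main()
    {
        // Low bytes of the failing addresses reported by MemTest86.
        uint[] failingAddresses = { 0x3C, 0x7C, 0xBC, 0xFC };

        foreach (uint addr in failingAddresses)
        {
            uint line   = addr >> 6;     // 64-byte cache line index
            uint offset = addr & 0x3F;   // offset within the line
            Console.WriteLine("0x{0:X2}: line {1}, offset 0x{2:X2}", addr, line, offset);
        }
        // All four addresses sit at offset 0x3C - the last 32-bit word of
        // their respective 64-byte cache lines.
    }
}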

So, I'm really at a loss as to what to make of this and what to do next, and I've got three days left to do it. I can't test the memory in another computer, because I don't have another computer to test it in. The BIOS could use updating, which could theoretically fix the problem, but I've been reluctant to do that because it gives the seller a trivial excuse to say "You broke it, I'm not giving you a refund!"

I suppose I need to try mixing and matching the CPUs and motherboards. I need to do that anyway, to find out exactly what broke on the old computer, but it should also tell me whether the problem is in the CPU or motherboard (though nothing more specific than that).

Saturday, January 23, 2010

EPIC FAIL

I got a letter from Wells Fargo bank with a new credit card today. Nothing out of the ordinary in that respect, but the extent to which Wells Fargo failed in the process is pretty remarkable.

The first page was all the stuff you'd expect - the credit card glued on, the account number and other info, and instructions for using it. The second page is shown below (the back is completely blank):



Looks completely innocuous and generic, right? Except for that black block in the lower right, which I filled in. Under it there is a bar code and, in 6-point font, a number of numbers (most of which I don't recognize), including the credit card number itself. If you weren't looking for it, you would almost certainly never have seen it.

The number of people who have thrown that page away without ever realizing it had their credit card number on it is surely uncountable. Funny how a page about guarding against fraud sets you up perfectly for fraud by printing entirely unnecessary sensitive information on a completely generic page (one that would have been cheaper to print without that information). You have to wonder if there's some malicious intent there.

So, after the facepalm, I go online to activate the new card. After logging in and going to the activation page on Wells Fargo's (secure) web site, I'm met by this page:



Birth date? Work phone number? Really? It's been known for a long time that "security questions" are major security vulnerabilities, but this may just set a new record for how far they can go.

I can't say I can trust Wells Fargo after that rather brilliant display of insecurity.

Tuesday, January 19, 2010

What the Goat

The National Weather Service has issued a tornado warning for the South Los Angeles, Long Beach and Whittier [right about here] areas as a powerful new storm moves ashore.
Tornado Warning Issued for South Los Angeles, Long Beach, and Whittier

A tornado warning. In Los Angeles County. Has that ever happened before?

Friday, June 05, 2009

& the Real World



Martial arts in a skirt, eh? I thought that kind of thing only happened in anime.

(found via http://www.darkroastedblend.com/2007/01/anti-us-north-korean-posters.html)

Friday, April 03, 2009

Name That Movie

See if you can recognize this one.

A bill written by big-business interests is proposed in the French parliament. Hugely controversial, the bill is opposed by nearly everyone apart from the business interests that wrote it, and it is difficult to see how it could pass (although you can never underestimate the corruption of government officials). As the deadline for the vote draws near, an intense debate lasting 42 hours straight breaks out among members of parliament.

Eventually, late Thursday evening, it is decided that the debate should cease, that members of parliament should go home for the night, and that the bill will be voted on the next week. So, the members do exactly that. After about 98% of them have left the building, the vote is called early, at nearly 11 PM on Thursday night. With 16 members remaining, the bill passes, 12 to 4.

Can you name that movie?

Actually, you can't, because it really happened - yesterday. This is the French three-strikes law, which promises to disconnect people from the internet on mere allegations of copyright infringement, without ever having to present evidence in court or even tell the accused what copyright they are thought to have infringed. It will also require all computers to run spyware that constantly talks to government systems and monitors your activity and the state of your network.

Welcome to French democracy, proving that America really isn't that bad after all.

Thursday, March 26, 2009

Die, .NET. Thanks.

So, I just encountered an (extremely) evil quirk of the .NET platform while tracking down a bug.

Everyone who programs in .NET knows that one key difference between structs and classes is that structs are (without ref specified) always passed by value, while classes are passed by reference. Apparently that rule is not limited to actual passing of the structs themselves; passing a "pointer" to a callback method on an instance of a struct causes a copy of the entire struct to be captured, and the callback is then called on that copy, not on your original instance.

Example:
system.FindCollisions(collisionSet.OnPossibleCollision, workingSet);

In this line, the callback passed to FindCollisions is bound to a copy of collisionSet. When FindCollisions later calls that callback, it operates on the copy, not on collisionSet itself.

I'm not sure whether this is by design or whether it's a bug. While it's consistent with the policy of always passing structs by value, the fact that it's so counter-intuitive makes me wonder whether it might in fact be a bug.
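
For reference, here is a minimal, self-contained sketch of the same behavior, using a made-up Counter struct rather than the actual collision classes:

using System;

struct Counter
{
    public int Count;
    public void Increment() { Count++; }
}

class Program
{
    static void Main()
    {
        Counter counter = new Counter();

        // Creating a delegate from a struct instance's method boxes a copy of
        // the struct; the delegate is bound to that copy, not to 'counter'.
        Action increment = counter.Increment;

        increment();
        increment();

        // Prints 0: both calls incremented the boxed copy, and the local
        // 'counter' was never touched.
        Console.WriteLine(counter.Count);
    }
}

Change struct to class on Counter and the same program prints 2, which is exactly the struct-versus-class difference showing through.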

Tuesday, December 16, 2008

Intriguing

Well, just as school is almost over (finals are this week) and I don't have a job lined up yet, substantial amounts of amusement will be welcome in the near future (especially given how bleak the anime outlook is this season...). As it turns out, I'm in luck! While banging my head against a wall until I pass out working on a term project, something amusing happened. I don't have time to explain the details now (despite the fact that this is much more interesting than my school project), but here's a short headline of what's up and coming: Q vs. Scam Debt Collection Agency.

Look forward to it!

Tuesday, November 04, 2008

& the Audio Driver Incident

Several months ago, I (finally) upgraded my computer. My old one was a 1.8 GHz Athlon XP (a single 32-bit core) with 1.25 gigs of RAM and a GeForce 3; in other words, it was 2002 or 2003 hardware. My new computer is a 2.4 GHz Core 2 (four 64-bit cores) with 4 gigs of RAM and a Radeon 4850; depending on the benchmark, my new CPU is 10-18x as fast as my old one, if you count all 4 cores. After trying various voodoo to get my old XP installation to run on the new computer (despite the fact that it wouldn't have been able to use about a gig of my RAM), I ultimately gave up and installed Windows Server 2008 64-bit. After dealing with a whole bunch of problems getting various things working on 64-bit 2008, the setup ultimately ended up being acceptable, and I've used it ever since.

However, a couple of relatively minor problems have been pretty long-standing, and continued until a few days ago. One was easy to diagnose: Firefox was leaking memory like heck. For every day I left my computer on, Firefox's RAM usage would grow by a couple hundred megs, reaching a good 2 gigs on occasion (I usually kill it before it gets to that point). While this was certainly an annoyance, it wasn't much of a problem, as I have 4 gigs of memory, and I can simply restart Firefox to reclaim all the leaked memory whenever it gets so large it becomes a problem.

The other was much harder to diagnose. Something besides Firefox was leaking memory, and it was not clear what. Total system memory usage would creep up over days and, even ignoring Firefox, would end up consuming all 4 gigs within about two weeks of the last reboot. Unlike with Firefox, there was no obvious culprit - no single process showed a significant accumulation of memory, nor were excess processes being created, leaving 1-2 gigs of memory I couldn't account for. So, I went several months without knowing what the problem was, usually handling it by restarting my computer every week or so.

Then, one day my dad called me from work to ask me why his computer at work was sometimes performing poorly. So I had him look through the process list and system statistics and look for memory leaks, excessive CPU usage, etc. As I don't have the exact terminology used on those pages memorized, I also opened up the listing on my computer to be sure I told him to look for the right things.

This brought something very curious to my attention: the total handle count for my computer was over 4 million. This is a VERY large number of handles; normally a computer doesn't have more than 20-50k handles open at a time - two orders of magnitude less than what my computer was experiencing. This was an almost certain indication that something was leaking handles on a massive scale. After adding the handles column to the process list, I found that audiodg.exe held some 98% of those handles. Some looking online revealed that that process is a host for audio driver components and DRM. Further searching for audiodg.exe and handle leaks turned up some reverse-engineering by one person showing that this was due to the Sonic Focus audio driver for my Asus motherboard leaking registry handles.
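
For what it's worth, the same census can be done in a few lines of C# instead of staring at Task Manager; here's a rough sketch (not the tool I actually used, just an equivalent check built on System.Diagnostics):

using System;
using System.Diagnostics;
using System.Linq;

class HandleCensus
{
    static void Main()
    {
        // Rough equivalent of adding the "Handles" column in Task Manager:
        // the system-wide handle total, plus the top handle-holding processes.
        Process[] processes = Process.GetProcesses();

        long total = processes.Sum(p => (long)SafeHandleCount(p));
        Console.WriteLine("Total handles: " + total);

        foreach (Process p in processes.OrderByDescending(SafeHandleCount).Take(10))
            Console.WriteLine(p.ProcessName + ": " + SafeHandleCount(p) + " handles");
    }

    // HandleCount can throw for processes we can't access (or that have
    // exited), so treat those as zero instead of crashing.
    static int SafeHandleCount(Process p)
    {
        try { return p.HandleCount; }
        catch { return 0; }
    }
}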

Fortunately, there was an updated driver available by this time that addresses the issue. As my computer was at 96% RAM usage (the worst it's ever been - usually I reboot it before it gets to that point), I immediately installed the driver and restarted the audio services (of which audiodg.exe is one). This resulted in a shocking, instant 1.3-gig drop in kernel memory usage, to less than 400 megs total. It's been a day and a half since then, and audiodg.exe is currently using 226 handles, suggesting that the problem is either gone or drastically reduced (it has increased by about 70 handles in those 1.5 days); and even if it is still leaking handles, 50 handles a day is a tolerable leakage, as that's only something like 10k a day.

So, this whole thing revealed that Windows is quite robust. Given that most computers never go above 50k handles, I was very surprised that Windows was able to handle 6.6 million handles (the highest I've ever seen it reach) without falling over and dying (although this wouldn't have been possible with a 32-bit version of Windows, as that 1.7 gigs of kernel memory wouldn't have fit in the 2-gig kernel address space once memory-mapped devices have their memory space allocated). Traditionally, Unix has had a limit of 1024 file handles per process, though I don't know what's typical these days (I know at least some Unix OSes have that as a configurable option).

After pursuing that problem to its conclusion, I decided to do some more looking for handle leaks in other processes. While the average process used only 200-500 handles, a number of processes get as high as 2k handles (which is not abnormally high). However, one process - smc.exe, part of Symantec Antivirus - has almost 50k handles allocated, making it a good candidate for a handle leak. Looking at the process in Process Explorer shows that a good 95% of these handles are of the same type - specifically, unnamed event handles - providing further evidence of handle leakage. That's as far as I've gotten; I haven't spent much time investigating the problem or looking for an analysis online (though the brief searches I did didn't find anything related to this). So, that's work for the future.

Sunday, August 10, 2008

Gah

Random fact of the day: the Microsoft Developer Network (MSDN) library no longer gives information about which versions of Windows prior to 2000 support a given function. For example, MSDN does not list support for CreateFile in any version of Windows prior to 2000, despite it being present in every single 32-bit version of Windows (Windows 95 and NT 3.1 onward).

Saturday, July 12, 2008

Epic Fail

So, on Friday I got a new computer. The computer consists of a quad-core Core 2 CPU, 4 gigs of memory, and a Radeon HD 4850 based video card. Although there are some known techniques for getting an existing Windows installation to work in a new computer, this install simply refused to work with the USB ports on this computer (the computer freezes up several seconds after Windows has booted; disabling the USB ports in the BIOS allows it to work, but is not an acceptable solution). So, I ultimately ended up reinstalling Windows.

I had quite a few options when it came to choosing a version of Windows. Thanks to my obsessive downloading of everything on MSDN Academic Alliance, I have legal copies of Windows 2000, Windows XP x86, Windows XP x64, Vista x86 & x64, two copies of Windows Server 2003, and Windows Server 2008 x86 & x64. For those not familiar with the Servers, 2003 is an updated server version of XP, and 2008 is an updated server version of Vista.

As Server 2008 is an updated version of Vista with additional features (and the newest of any version), I figured I'd use that, and that's what I'm writing on right now. However, this install may be short-lived. As it turns out, just about nothing works on Server 2008. In the last three hours I've encountered the following:
- The Asus motherboard driver installer for Vista x64 will not run. When run, it says "Does not support this Operating System: WNT_6.0I_64". If I understand this correctly, it's saying it doesn't support Windows NT 6.0 x64. This is curious, as that is exactly what Vista x64 is, suggesting that the installer does not run on the very system it was made for. Furthermore, several pieces of motherboard hardware do not have drivers included with Server 2008, and so show up as Unknown Devices and PCI Devices (a couple of unknown devices remain even after manually installing each driver). Epic Asus fail.
- The other major driver I needed was the 4850 driver. This was especially important because the 4850 has a known issue where the fan speed stays too low, resulting in high temperatures. So, I downloaded the latest version of the driver and the ATI Catalyst programs from the video card manufacturer (as best I can tell, the ATI web site doesn't list drivers for the 4850) and installed both. Installation had no problems; running the Catalyst Control Center, however, resulted in the message "The Catalyst Control Center is not supported by the driver version of your enabled graphics adapter." Very curious, considering that the driver and the Control Center came bundled in the same ZIP file. Epic ATI fail.
- One of the programs I use most of all (by far) is Windows Live Messenger. Naturally, I soon needed to install it on this computer. The Windows installer even helpfully created a Windows Live Messenger Download link in my start menu. Unfortunately, following the link, downloading the program, and double-clicking it (I'm not even mentioning the UAC and IE annoyances) brought up the error message "Sorry, Windows Live programs cannot be installed on Windows Server, Windows XP Professional x64 Edition, or Windows operating systems earlier than Windows XP Service Pack 2". By process of elimination, this appears to say that it only supports XP x86 SP2+, Vista x86, and Vista x64; curious, given that Microsoft advertises support for Server 2008. Epic Microsoft fail.
- The other program I use most often is Firefox. So, that was next on the list. Download, install, so far so good. Launching Firefox, however, is a completely different story: instant crash. Epic Firefox fail.
- And just for good measure, this install has blue-screened once so far (in about 3 hours), with the PAGE_FAULT_IN_NONPAGED_AREA bugcheck. I'm not sure exactly whose failure this is, but the Asus driver problems seem the most likely suspect. Epic fail.

Thursday, May 29, 2008

Absolutely Amazing

Since it would take quite a patchwork of quotes to summarize this story, I'll just give a few bullet-points of my own as a summary.
- Revision3 uses BitTorrent to distribute its own content (legal distribution, in other words)
- Everybody's second-favorite company, MediaDefender, decided to play with R3's tracker. Once they found a hole that allowed the tracker to serve torrents not published by R3, they began using the tracker to track their own files.
- R3 discovered that somebody was using their tracker for external content and banned MD's torrents
- MD's servers (the ones actually uploading the files they were using R3's tracker to track) responded by DoSing R3's tracker (according to one person on Slashdot, MD has a 9 Gbps internet connection for this purpose), taking R3's tracker and other systems completely offline
- The FBI is currently investigating the incident. Some have suggested and are praying that the PATRIOT Act could be used to charge MD with cyber-terrorism, as defined by law.

Various coverage:
Inside the Attack that Crippled Revision3 (mirror)
MediaDefender, Revision3, screw-up
Revision3 Sends FBI after MediaDefender

Friday, April 18, 2008

Novel Method of Attack

Looks like the RIAA has just undertaken a novel new campaign against P2P.
"Are MP3s doing permanent damage to your ears?"

- sound bite in a commercial for the news, on an upcoming story. I had to stop what I was doing for a moment to convince myself that I hadn't misheard it.

Sunday, April 06, 2008

For the Love of Kaity...

Recently, it has been observed that Comcast is disrupting TCP connections using forged TCP reset (RST) packets [1]. These reset packets were originally targeted at TCP connections associated with the BitTorrent file-sharing protocol. However, Comcast has stated that they are transitioning to a more "protocol neutral" traffic shaping approach [2]. We have recently observed this shift in policy, and have collected network traffic traces to demonstrate the behavior of their traffic shaping. In particular, we are able (during peak usage times) to synthetically generate a relatively large number of TCP reset packets aimed at any new TCP connection regardless of the application-level protocol. Surprisingly, this traffic shaping even disrupts normal web browsing and e-mail applications.

New traffic shaping can disrupt a Comcast Internet connection

I think I hear the entire Comcast tech support department committing seppuku.

From a technical standpoint, there are few options more idiotic than sending reset packets to kill BitTorrent connections, as Comcast was doing previously. They just found one: killing ALL connections in that manner. This option is so bad, in fact, that it leads me to seriously consider the possibility that Comcast is doing this intentionally to teach the FCC a lesson: that resetting BT connections wasn't so bad by comparison. What could/should Comcast have done differently? Let's look at a few possibilities.

The best option (short of improving their infrastructure) would be to monitor the amount of traffic going through each modem and, if a disproportionately large amount is coming from one modem, tell that modem to limit its traffic rate. Unfortunately, I'm told Comcast modems do not have the ability to change their maximum speed without a reboot, so this isn't really possible.

Failing that, there is an extremely simple yet efficient method: simply start randomly dropping packets when there's congestion. While this may sound like a sarcastic suggestion, it's not. The TCP protocol uses two pieces of information to determine how fast to send data: the amount of buffer space the receiver has (used to prevent a sender on a fast connection from flooding a receiver on a slow connection), and dropped packets. If a dropped packet is detected, TCP lowers its send rate. And if BT makes up a large proportion of traffic, randomly dropping packets will result in a larger number of BT connections getting throttled than non-BT connections.

Thus this is a simple and easy way to effectively lower the amount of data being sent in a way biased against heavy bandwidth users, without interrupting any of the connections. This works even better if their Sandvine hardware can detect BT traffic and selectively drop packets only from BT traffic. As long as they weren't dropping so many packets that BT slowed to a crawl, I wouldn't mind my ISP doing this.
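
To make the idea concrete, here's a toy C# sketch of congestion-triggered random dropping (essentially Random Early Detection; it's purely illustrative and has nothing to do with Comcast's actual Sandvine gear):

using System;

// As the queue fills past a threshold, packets are dropped with increasing
// probability; TCP senders interpret the drops as congestion and back off.
class RandomDropQueue
{
    private readonly int capacity;
    private readonly int threshold;     // start dropping above this depth
    private readonly Random rng = new Random();
    private int depth;                  // packets currently queued

    public RandomDropQueue(int capacity, int threshold)
    {
        this.capacity = capacity;
        this.threshold = threshold;
    }

    // Returns true if the packet is queued, false if it is dropped.
    public bool TryEnqueue()
    {
        if (depth >= capacity)
            return false;               // hard drop: queue is completely full

        if (depth > threshold)
        {
            // Drop probability rises linearly from 0 at the threshold to 1
            // at full capacity.
            double dropProbability = (depth - threshold) / (double)(capacity - threshold);
            if (rng.NextDouble() < dropProbability)
                return false;
        }

        depth++;
        return true;
    }

    public void Dequeue()
    {
        if (depth > 0) depth--;
    }
}

Since heavy senders have more packets in flight, they see proportionally more drops and back off harder, all without any connection ever being reset.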

Saturday, March 08, 2008

Google Epic Fail

As the following indicates, Google sucks at Japanese translation.

"Work with everyone involved to meet the time that I talk directly to the image more than anything to receive a lot of opportunities, so those places will eventually work on the float's theme song has become melody It is disappointing that many of the Yes."

Monday, January 14, 2008

Gah

So, I'm randomly reading reviews of the new anime series this season (two episodes in, now; if you want a list of them, check out Random Curiosity's). There were a couple that sounded interesting or different enough to consider, but I was still reading reviews of them and others. A summary from the review of episode 2 of Shigofumi on the same blog sums up what made the decision a lot harder (I was originally considering watching that series):
It might be a little premature for me to say this, but out of all the new shows I’ve seen so far this winter season, Shigofumi has been my favorite. There’s just something about the way the plot continues to surprise me with themes that are a lot more mature than I was expecting, from multiple instances of death to child pornography.

Viva Japan.

On a tangentially related note, a philosophical question occurred to me: what would anime writers use as a medium for delivery of ecchi if group bathing wasn't such a feature of Japanese culture?

Tuesday, January 08, 2008

Come Again?

If you're unfortunate enough, you might have noticed that Windows will sometimes automatically delete files such as MP3s when you try to open them, particularly after receiving them over MSN Messenger. I just had that happen to me, and it wasn't the first time I've seen it. Following the help link that Windows offers, after notifying you that it has unilaterally decided to delete (and already has deleted) the file, brings up this information:
Sending and reading e-mail is one of the most popular activities on the Internet. The widespread use of this technology, however, makes it a primary way for computer viruses to spread. Because viruses and other security threats are often contained in e-mail attachments, Microsoft Windows XP Service Pack 2 (SP2) helps protect your computer by blocking e-mail attachments that might be harmful.

In most cases, Windows XP SP2 will block files that have the potential to harm your computer if they come to you through e-mail or other communication programs. Windows will block these files if your program is running in a strong security mode. Most files that contain script or code that could run without your permission will be blocked. Some common examples of this file type are those with file names that end in .exe, .bat, and .js.

Blocking these files is very important to do, since directly opening files of this type poses a risk to your computer and personal data.
This is just baiting the Slashdot crowd. Is there a known but unfixed (major) security vulnerability in Windows Media Player that allows a malicious MP3 to execute script or executable code just by being listened to? Did the RIAA play a part in this design decision?

Saturday, August 25, 2007

Thanks (not)

A couple of days ago, I got my monthly notice that my tuition payment was due.

Statement Date [this is the date printed on it, not the date it was actually received]: 08/17/07
Payment Due Date: 08/14/07

Talk about helpful. Makes you wonder why they bothered to send it at all; could have saved some on postage.

Sunday, August 12, 2007

& Debates - Quantum Physics

So, I had a random thought that started a debate thread on Star Alliance (remember that from way, way back?). And boy is it a whopper. Registration on the forum is required to participate, so I'll copy some of the bigger posts in the debate here.

The opening post:
So, I had a random thought; that's, of course, rarely a good thing. Now, let's see if this can turn into a full debate. *ahem* Have you ever considered that some of the most puzzling aspects of quantum physics could be logically explained by the universe being a computer simulation? Let's go over a couple examples.

- As best we can tell, mass, distance, and time all appear to be quantized; that is, they come only in integer multiples of some smallest unit. Any computer constructed within a physics system remotely like ours is only capable of representing quantized values.
- One of the harder-to-grasp concepts in quantum physics is that variables associated with things, particularly subatomic particles, don't appear to have values assigned until those variables are actually used, and values can even be lost after they are assigned. When those variables do not have values assigned, they are represented simply by probability distributions, with the actual value chosen randomly when it is needed. There's a saying in computer science (one of those things you should be careful not to take too absolutely) - never store anything you can recalculate later; quantum physics appears to take this one step further, not bothering to store anything that isn't needed at the moment. The point, of course, is to massively reduce the amount of memory required for the simulation by only storing essential values.

Input?
My most recent post:
I myself have considered that the Planck length and Planck time could be a limit to the universe's "resolution".
Of course. As far as we know at the moment, the Planck constants are the resolution of the universe.
If the universe was a huge simulation, would everyone else be part of the simulation, or separate entities within the simulation, or all we all part of the simulation without any free will.
Unknown. There does appear to be full simulation of individual units (it's unknown exactly what those are: protons/neutrons/electrons/photons, quarks, etc.) in some cases, but it's possible there are optimizations that process clusters of them as a group. Perhaps the fact that particles also behave like waves is a trick to allow the behavior of particles to be calculated in bulk, using simpler computations. There are other examples as well. Some things cells do seem odd, in that it doesn't entirely seem like all the atoms in a cell are working independently; in some respects the cell, or part of a cell, seems like it's acting as a single unit.

Of course there are counter-examples as well. If the simulation were abstracting as much as possible, it's strange that Brownian motion would exist, as it indicates individual atoms are being processed in a case where it would be logical to abstract them into a group.
Are we all bots on a counterstrike server, is just one of us a bot, or are there no bots?
There would be at least three basic possibilities, which could be combined:
- We are being controlled by players, either directly (e.g. as in an FPS or RPG) or indirectly (the player is able to manipulate things like basic predispositions, though actual actions are a result of physical processes using those predispositions; think Populous). Possible, but this seems less likely to me than the other two.
- We are entirely naturalistic constructions. That is, there is nothing different about us than anything else in the simulated universe; we are nothing more than a result of the laws of physics being simulated. This is the atheist/naturalist world view.
- We are programmed constructs - AIs. We have programs which operate independently but within the constraints of the laws of physics.

But holy crap. Now THAT is an interesting idea. Obviously you could call the programmer(s) God (in the monotheistic, omnipotent sense). But it's also possible that different programmers and/or players (if such a thing exists) could form an entire pantheon. In that case, it's entirely possible that every god that has ever been worshiped throughout history actually does exist (or existed at one time; it's possible gods "die off" as players/programmers lose interest in playing them).

Sunday, July 22, 2007

Middle East Loses What Little Was Left of Its Sanity

A few weeks ago, 14 squirrels equipped with espionage systems of foreign intelligence services were captured by [Iranian] intelligence forces along the country's borders. These trained squirrels, each of which weighed just over 700 grams, were released on the borders of the country for intelligence and espionage purposes. According to the announcement made by Iranian intelligence officials, alert police officials caught these squirrels before they could carry out any task.
- Iranian newspaper

Word spread among the populace that UK troops had introduced strange man-eating, bear-like beasts into the area to sow panic.

But several of the creatures, caught and killed by local farmers, have been identified by experts as honey badgers.
...
UK military spokesman Major Mike Shearer said: "We can categorically state that we have not released man-eating badgers into the area.
- Iraqi rumor

"We can categorically state that we have not released man-eating badgers into the area."
That is just the most godly quote ever.


Aww, aren't they cute? You'd never know they're man-eating badgers of death!