Yick, this thread is going to hell in a handbasket -- sorry I haven't been following it more closely. I'll respond first regarding the discussion with qnn:
> Really the best option at this point is just to order a monthly license and use it for up to 14 days, then cancel it for a refund. This will cost $20 USD or less depending on your type of license, which will be refunded later.
Just to amend Roger's point here: this is NOT the recommended course of action, nor would I ever suggest that our clients spend more money to fix a problem on OUR end.
> Unfortunately there is nothing else we can do about this at the moment.
I will need to look into this specific case, as I'm not sure why a comment like this would need to be made... the problem was on our end, so we certainly should have been able to do something about it -- and as best I can see, we DID resolve it that same day.
> Then got a new package for trial (#150801) and proceeded to install the new key in the server. Got this message:
>
> **** WARNING ****
> License validation failed. Daemon will reject all connections until a valid license is installed
> **** WARNING ****
That just means your trial key wasn't installed properly and is not related to any other issues you're experiencing. A situation we commonly see is one in which a client purchases a trial or a second license and assumes it will be applied to an existing server automatically. That is not, of course, how it works -- if you purchase another license, in most cases you're using it on a separate server and wouldn't want to overwrite your existing server's key. You have to install the new key on your server before it'll work.
> Not only did you get it wrong on the customer service/business side of things, but you are also clueless about how PayPal's dispute resolution system works to make such a bold statement.
No, this part of what Roger said is correct -- if we receive a chargeback or dispute on an account, it will go into suspension, so that's a bad idea.
> Due to your lack of response and solution, we had no other choice but to look for another, more efficient provider.
I have to admit I'm a bit puzzled by this, although perhaps I've missed some critical bit of information. To explain, though: as far as I can see, you posted the above comment on September 12th, three days after opening ticket YA3-TLE3-449. But a member of our staff responded to that ticket on September 9th at 5:46 PM -- the same day you opened it, within about four hours -- and explained that your license problem had been fixed.
That is the only ticket I see on your account, so I'm not sure what "lack of response and solution" you're referring to... as best I can see, we did respond and resolve your issue on the very same day you opened the ticket...?
> At least make one last thing right and refund us the payment for this last month.
We can certainly do that, although you'd need to open a ticket with the helpdesk to allow us to process this.
> If your personnel is very limited, then hire more people to do things right.
That was an awkwardly phrased comment by Roger. We have plenty of staff; what Roger meant was that the number of staff we employ *who have access to our PayPal and merchant accounts* is limited. That is true: we don't grant that access to all of our staff, for both security and privacy reasons.
> Should we care about your problems with this poorly planned hosting change when you haven't even notified us ahead of time to prevent this fiasco? Get a grip on reality.
It sounds like you're not aware of the scope of what happened last month. Our datacenter brought us down hard -- their negligence ranged from causing major data loss on one of our servers, to null-routing a number of our IP addresses for no quantifiable reason during our recovery efforts, to completely ignoring their support SLA and keeping us waiting for days on end in some cases for simple status updates. This (not the migration) was the root cause for the issues you experienced. We had no choice but to perform an emergency migration (entirely unplanned) to another DC because the folks at our original DC apparently lost their minds overnight. We're now in a new facility and working on deploying a failover infrastructure at another, separate DC as well to ensure that nothing like this ever happens again.
In any case, that is not by any means an attempt to pin the blame elsewhere -- it was our fault (my fault, specifically) for choosing what ended up (despite reasonable reviews) being an awful DC -- but I thought it was important to explain that we didn't just randomly and carelessly decide to perform a massive infrastructure migration without notifying our clients. It was purely a reaction to the situation at hand and was not at all something we had hoped to have to do.