I've set up an Azure Logic App to pass the body of emails received in an Outlook 365 inbox to an HTTP-triggered Azure function, similar to the setup shown here: Call Azure Functions from workflows - Azure Logic Apps | Microsoft Learn, although I am sending the email Body rather than the "From" or "Received time" fields shown in the example. However, the Logic App doesn't seem to be sending anything, as the HttpRequest object received by the function is null. Any ideas what I've done wrong?
The code of the logic app:
{
    "definition": {
        "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
        "actions": {
            "Pass_Email_Body_to_Function_MoveitEmailProcessor": {
                "inputs": {
                    "body": {
                        "emailBody": "@triggerBody()?['body']"
                    },
                    "function": {
                        "id": "/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.Web/sites/my-azure-function-app/functions/myHttpTriggeredFunction"
                    }
                },
                "runAfter": {},
                "type": "Function"
            }
        },
        "contentVersion": "1.0.0.0",
        "outputs": {},
        "parameters": {
            "$connections": {
                "defaultValue": {},
                "type": "Object"
            }
        },
        "triggers": {
            "When_a_new_email_arrives_(V3)": {
                "inputs": {
                    "fetch": {
                        "method": "get",
                        "pathTemplate": {
                            "template": "/v3/Mail/OnNewEmail"
                        },
                        "queries": {
                            "folderPath": "Inbox",
                            "from": "myemail@myemail.com"
                        }
                    },
                    "host": {
                        "connection": {
                            "name": "@parameters('$connections')['office365_1']['connectionId']"
                        }
                    },
                    "subscribe": {
                        "body": {
                            "NotificationUrl": "@{listCallbackUrl()}"
                        },
                        "method": "post",
                        "pathTemplate": {
                            "template": "/GraphMailSubscriptionPoke/$subscriptions"
                        },
                        "queries": {
                            "folderPath": "Inbox"
                        }
                    }
                },
                "splitOn": "@triggerBody()?['value']",
                "type": "ApiConnectionNotification"
            }
        }
    },
    "parameters": {
        "$connections": {
            "value": {
                "office365_1": {
                    "connectionId": "/subscriptions/<my-subscription-id>/resourceGroups/<my-resource-group>/providers/Microsoft.Web/connections/office365-1",
                    "connectionName": "office365-1",
                    "id": "/subscriptions/<my-subscription-id>/providers/Microsoft.Web/locations/eastus/managedApis/office365"
                }
            }
        }
    }
}
And the Azure Function (v4):
[Function("myHttpTriggeredFunction")]
public IActionResult Run([HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
{
    string emailBody = "";
    try
    {
        _logger.LogInformation("myHttpTriggeredFunction function is processing request.");
        if (req is not null)
        {
            var emailStream = req.Body;
            using (var reader = new StreamReader(emailStream))
            {
                emailBody = reader.ReadToEnd();
                _logger.LogInformation($"Request body: {emailBody}");
            }
        }
        else
        {
            _logger.LogInformation("HttpRequest is null");
        }
    }
    catch (Exception ex)
    {
        _logger.LogError($"{ex.Message} {ex.StackTrace}");
    }
    return new OkObjectResult("Processed email");
}
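(Side note for anyone reading along: because the Logic App action above wraps the content in a JSON object, the raw body the function should receive looks like {"emailBody": "..."}, so unwrapping it would be something along these lines - a sketch only, assuming System.Text.Json is available:)
// Sketch: unwrap the JSON the Logic App sends, e.g. {"emailBody": "<p>Hi</p>"}
// ("emailBody" is the property name from the workflow definition above)
using var doc = System.Text.Json.JsonDocument.Parse(emailBody);
string actualBody = doc.RootElement.GetProperty("emailBody").GetString() ?? "";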
There are no solutions, only trade-offs. - Thomas Sowell
A day can really slip by when you're deliberately avoiding what you're supposed to do. - Calvin (Bill Watterson, Calvin & Hobbes)
|
Found the issue.
For anyone who runs across this or a similar problem with an HTTP-triggered Azure function in the future: it may be because you are running the function in isolated worker mode. If so, you must change the signature of the function to take an HttpRequestData object rather than an HttpRequest object in the Run parameters, and return an HttpResponseData object, like this:
[Function(nameof(myHttpTriggeredFunction))]
public static HttpResponseData Run([HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req, FunctionContext executionContext)
{
}
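For reference, a minimal body for that signature might look something like the following. This is only a sketch, not my exact code; it assumes the usual Microsoft.Azure.Functions.Worker.Http, Microsoft.Extensions.Logging, System.IO, and System.Net usings:
[Function(nameof(myHttpTriggeredFunction))]
public static async Task<HttpResponseData> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequestData req,
    FunctionContext executionContext)
{
    var logger = executionContext.GetLogger(nameof(myHttpTriggeredFunction));

    // In the isolated worker model the request body is read from HttpRequestData.
    string emailBody = await new StreamReader(req.Body).ReadToEndAsync();
    logger.LogInformation($"Request body: {emailBody}");

    // Responses are created from the request rather than returned as an IActionResult.
    var response = req.CreateResponse(HttpStatusCode.OK);
    await response.WriteStringAsync("Processed email");
    return response;
}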
There are no solutions, only trade-offs. - Thomas Sowell
A day can really slip by when you're deliberately avoiding what you're supposed to do. - Calvin (Bill Watterson, Calvin & Hobbes)
|
Hi everyone, I hope you're having a nice day today! I've tried reading some articles on how to set up minikube, and I'm wondering where to find some basic resources. I've looked here: Getting Started with Kubernetes: A Comprehensive Minikube Guide - Codabase Bytes[^], which was somewhat helpful, and also found the basic Minikube docs here: minikube start[^]
Any and all advice for running minikube locally and learning about the container part of it would be super useful!
Thanks everyone, have a nice weekend and a happy St Patrick's Day!
|
I have tried this twice.
I have created a VM in Microsoft Azure for Tiki Wiki (it is available from the Azure Portal as 'Tiki Wiki CMS Groupware packaged by Bitnami'). It comes up successfully with its login form. However, nowhere in the build is there anywhere to specify an admin username / password for the wiki, so I cannot create a 'normal' user. The help pages (e.g. tiki-wiki-login-help[^]) suggest using the 'I don't know my Admin Username or Email address' link, but that is not there. The alternative is to use cPanel (using the instructions at How to Login to cPanel – InMotion Hosting Support Center[^]), but that does not work either. The only username / password that I have had to specify when creating the VM was (presumably) for the root user; but, as this is a SaaS offering, I cannot log on to the VM.
Any clues would be appreciated! Thanks in advance.
|
I can only suppose you are referring to a problem with tiki-wiki and thus it really has nothing to do with cloud computing.
Only thing I would suggest is that before diving into cloud VMs you should run the application(s) locally first so you understand exactly how to install it, configure it and use it.
Looking briefly at the install process for that product, I see that it describes how to create the admin user in Step 10:
Installing TikiWiki manually | InMotion Hosting[^]
If that does not seem to be the problem then maybe you could explain your problem in a different way to make it clear what it is.
|
Thanks for the response - my post is from 15 months ago. I abandoned TikiWiki and used DokuWiki instead. DokuWiki is a good, relatively easy-to-use product (and free from Bitnami on Azure). Unfortunately, management did not like it - it looked too old-fashioned for them, and they decided to go with a Teams / SharePoint solution. I do not know how they are progressing with that, as they decided that my time with them had finished.
|
The short answer appears to be no.
Background: I am a SignalR/web newbie. My goal is to learn a bit about both these. So I created a chat hub based on the Tutorial: Real-time chat with SignalR 2 | Microsoft Docs. It worked nicely in VS2017. Then I spent many hours fruitlessly trying to get it to run under IIS on an AWS/LightSail VPS. I kept thinking the source of my problem was my ignorance. Turns out, to paraphrase Arthur Conan Doyle (of Sherlock Holmes fame) “Once you eliminate all the possible causes, whatever remains, no matter how improbable, must be the truth.” That's the long explanation to the short answer above!
It appears as though Microsoft will not allow AWS to host SignalR, perhaps because it does not want competition (for Teams and such) from Amazon!
Hopefully this will save someone a similar painful experience. (BTW: I have since learned that AWS has introduced AWS Websockets API. If you are interested, you might want to take a look at this nice article Which is best? WebSockets or SignalR - Dotnet Playbook.)
|
MS Azure SignalR Service doesn't have built-in support for other serverless platforms, such as AWS Lambda or Google Cloud Functions.
|
I'm new to the subject of CloudSim, and I've run some of the existing examples in the CloudSim package (the 8 existing examples...).
I want to implement an optimal SFC allocation scenario with CloudSim. This scenario will include the definition of a cost function to allocate the virtual network functions (included in the service function chain) optimally on the network nodes with processing resources (CPUs).
Moreover, there will be constraints on the aforementioned cost function, and this optimization problem could be solved through different heuristic methods, such as a genetic algorithm and so on.
I know that there's a class named DatacenterBroker that must be extended in order to create my own algorithm for submitting Cloudlets to the defined virtual machines. I'm also aware that, in order to create new algorithms to place the defined VMs in the hosts (in a Datacenter), a class named VmAllocationPolicy (which is defined as an abstract class) has to be extended. But I want to find out what algorithm is used in this class by default, so that I can get some ideas on how to propose new algorithms. Since it is an abstract class, I can't see what's inside its methods. So what do I have to do about it?
Secondly, I want to know whether it is possible to consider Datacenters as nodes of the network and define links with specific bandwidths and latencies between those Datacenters. And how am I going to include my optimization code in the scenario? How do I define an algorithm for routing between these Datacenters after the aforementioned placement has taken place? How do I define the ingress and egress nodes in the scenario?
Or does anyone have any idea whether implementing such a scenario is possible in a Mininet environment, and how?
Thanks
|
Everyone knows the three largest cloud storage platforms, but I am a little stuck on the choice. AWS and Google Cloud Platform have roughly equal pros and cons. Has anyone faced a similar choice? Please help me to understand.
|
It depends on what you want to do:
- Want a developer-friendly cloud with good documentation/community support? Go with Azure.
- Want a consumer-friendly cloud with everything ready as-an-API? Go with Google Cloud Platform.
- Want a cloud platform in the Asia-pacific market? Go with Alibaba Cloud (sometimes I feel like it is Azure but for Asian markets).
- Want just a hosting platform and a database set? Go with DigitalOcean.
- Want to get stuck reading documentation, creating support tickets, and learning that the issue you are facing in 2020 was reported back in 2015 but the platform decided never to solve it and moved on with marketing and sales campaigns? Go with AWS.
I hope this helps.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
Sure!
Azure: Developer-friendly with strong support.
Google Cloud Platform: Consumer-friendly with extensive APIs.
Alibaba Cloud: Targeting the Asia-Pacific market.
DigitalOcean: Straightforward hosting and databases.
AWS: Extensive but potentially challenging documentation and support.
|
I do Azure because I have been a Microsoft developer forever and so it's easiest for me and Microsoft is awesome.
Social Media - A platform that makes it easier for the crazies to find each other.
Everyone is born right handed. Only the strongest overcome it.
Fight for left-handed rights and hand equality.
|
Everything in the technology world comes as a full package; there will be pros and cons all along.
Whether you should be bothered about the cons or not all comes down to the usage. May I know what kind of business application you are talking about?
|
From my experience with both Azure and AWS, I found Azure better than AWS, as Microsoft carries better-architected hybrid solutions and security offerings than the others.
|
I am learning more about artificial intelligence these days, so I am working on AWS. It is smooth and efficient.
|
It's really hard to say anything without knowing what your requirements are. For instance, for my pet project I've chosen Oracle Cloud, as its free tier allows me to have multiple VMs with up to 24 GB RAM in total; since I don't anticipate big loads, having 12 GB VMs with a load balancer for free is quite neat.
In case you're serving a static site, CDNs might be a better option, since your code will be deployed to multiple edges.
Also, the cloud is pretty hot right now, but its real use cases are the ability to scale dynamically and to eliminate the ops work. Otherwise, you might just consider one big server[^].
|
So, I've "played" with Azure a couple of times in the past, and found it MASSIVELY confusing - a vast number of different services, a huge number of (wildly varied) pricing options, dozens of pages of configuration, and a huge amount of new terminology. I'm really not into that; I'm a developer at heart, I like writing code.
I have a client with a critical line-of-business ASP.NET Web Forms website (not web application), plus a separate sub-domain for testing. He would like to move this away from his cheap-as-chips shared hosting and onto Azure. (Two different hosting providers keep messing us around by changing configurations that break the site.)
So, is there a SIMPLE migration tool to help me move this site into Azure? A complication is that it uses MySQL, so we need a MySQL service on Azure too (with 2 separate databases, 1 prod and 1 UAT). The MS Azure site has reams of stuff about migrating, but it seems to assume you control a dedicated server. It doesn't necessarily need to be a tool; a comprehensive tutorial / walkthrough would be fine, BUT it would need to be up to date. (When tinkering with Azure a couple of years ago, I found not only was it complex, it was constantly shifting too!)
Also, anyone else's experiences with this migration path would be welcome, as I will need to give the client an estimate of my time on this migration. And once done, I will still need to connect from my existing d/b client (e.g. HeidiSQL, MySQL Workbench) to the databases, and be able to EASILY upload changed pages, access IIS logs, etc. I did at one point get the app running on Azure (linking back to the existing hosted MySQL d/b), so I know it will run with minimal changes; I'm just finding the whole thing pretty daunting right now.
|
Quote: critical line-of-business ASP.Net Webforms website (not web application)
Web Forms is a web application, and should be just fine on Microsoft Azure.
ASP.NET Web Deployment using Visual Studio: Introduction | Microsoft Docs
Quote: plus a separate sub-domain for testing.
Microsoft Azure provides this built in to the platform: Deployment Slots.
Quote: is there a SIMPLE migration tool to help me move this site into Azure?
It depends; if you can connect to the MySQL database locally and create a backup, Microsoft might have some tools to deploy the MySQL snapshot to the Azure platform.
Quote: A complication is that it uses MySql so we need a MySql service on Azure too (with 2 separate databases, 1 prod 1 UAT).
I recommend using Terraform to create the infrastructure for dev and UAT. The UAT environment can be created when testing is needed and then disposed of when it is not.
Quote: (When tinkering with Azure a couple of years ago I found not only was it complex, it was constantly shifting too!)
Sadly it still is. But if you can get a subscription for the MySQL engine on Azure, running the backup/restore should be just fine.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
Many thanks for your response.
Afzaal Ahmad Zeeshan wrote: Web Forms is a web application, and should be just fine on Microsoft Azure.
Nope. ASP.NET Web Forms allows you to create two types of web site: a "Web Application", which involves pre-compiling code into DLLs (all done by Visual Studio), where you deploy the DLLs and .ASPX pages; and a "Web Site", where the runtime compiles each code-behind page when it is first called. It's therefore critical that Azure includes those runtime components, or my site would not run as-is.
Afzaal Ahmad Zeeshan wrote: Microsoft Azure provides this built-in to the platform; Deployment Slots.
But only at the higher-end packages. Because the entire slot is "swapped", there's no way during the swap to also swap from the UAT to the production database if using a standard connection string. Also, the IIS InstanceID presumably changes during the swap, so there is no way for an end user to determine which environment they're looking at on-screen. (I use the InstanceID to make CSS changes, including adding the environment name, to reduce the risk of making "test" transactions in the production environment; see IIS_Environment_Detection[^].)
A quick scout around the "Deployment slots" documentation just now confirms Azure remains obscure and overly complex for "normal" website applications.
|
Quote: Nope. ASP.Net webforms allows you to create two types of web site - a "Web Application", which involves pre-compiling code into DLLs (all done by VisualStudio) and you implement the DLLs and .ASPX pages; and "Web Site", where the runtime does the compilation of each code-behind page when first called. It's therefore critical that Azure includes those runtime components or my site would not run as-is.
Right, thanks for the information. From what I "think", Visual Studio should be able to handle this situation.
The sh*t I complain about
It's like there ain't a cloud in the sky and it's raining out - Eminem
~! Firewall !~
|
We currently have an intranet portal which uses IMAP to read email from a shared mailbox on Office365, using the excellent MailKit[^] library.
I've just been made aware that Microsoft is going to disable basic authentication for IMAP clients. Originally, this was going to happen in October, but it's since been pushed back to the second half of 2021.
Basic Auth and Exchange Online – February 2020 Update - Microsoft Tech Community - 1191282[^]
After some panicked reading and experimentation (and much swearing), I've managed to get the MSAL[^] library to return an OAuth2 access token:
const string ClientId = "...";
const string Username = "...";
SecureString password = ...;
var scopes = new[] { "https://outlook.office365.com/.default" };
var app = PublicClientApplicationBuilder.Create(ClientId).WithAuthority(AadAuthorityAudience.AzureAdMultipleOrgs).Build();
var tokenResult = await app.AcquireTokenByUsernamePassword(scopes, Username, password).ExecuteAsync(cancellationToken);

After configuring the application as a "public client" in the Azure portal, giving it Mail.ReadWriteAll and Mail.SendAll permissions, and granting admin consent for my organisation, this code now returns a seemingly-valid access token.
According to the author of the MailKit library[^], all I need to do now is use the token to authenticate:
using var client = new ImapClient();
await client.ConnectAsync("outlook.office365.com", 993, SecureSocketOptions.Auto, cancellationToken);
await client.AuthenticateAsync(new SaslMechanismOAuth2(Username, tokenResult.AccessToken), cancellationToken);

Unfortunately, that simply throws an "authentication failed" exception:
MailKit.Security.AuthenticationException
HResult=0x80131500
Message=Authentication failed.
Source=MailKit
StackTrace:
at MailKit.Net.Imap.ImapClient.<AuthenticateAsync>d__81.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable.ConfiguredTaskAwaiter.GetResult()
at TestOffice365OAuth.Program.<MainAsync>d__1.MoveNext()
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.TaskAwaiter`1.GetResult()
at TestOffice365OAuth.Program.Main()

I don't know whether this is a bug in Microsoft's implementation, a bug in MailKit, a configuration error with my application, or a mistake in my code.
Has anyone managed to get Office365 IMAP access working with OAuth2? Can you spot anything I've missed?
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
Thanks to "paulflo150" on GitHub, I was able to authenticate with the token by changing the scopes to:
var scopes = new[] { "https://outlook.office365.com/IMAP.AccessAsUser.All" };

Now I need to find out how to connect to a shared mailbox. The usual trick of appending "\shared-mailbox-alias" to the username results in the same "authentication failed" error, and if I authenticate without it there are no "shared namespaces" available.
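To summarise for anyone who finds this later, here is the combination that authenticates successfully against the primary mailbox - just the working pieces from above stitched together, with the same ClientId / Username / password placeholders:
// Acquire the token with the IMAP-specific scope...
var scopes = new[] { "https://outlook.office365.com/IMAP.AccessAsUser.All" };
var app = PublicClientApplicationBuilder.Create(ClientId)
    .WithAuthority(AadAuthorityAudience.AzureAdMultipleOrgs)
    .Build();
var tokenResult = await app.AcquireTokenByUsernamePassword(scopes, Username, password)
    .ExecuteAsync(cancellationToken);

// ...then hand it to MailKit as a SASL OAuth2 mechanism.
using var client = new ImapClient();
await client.ConnectAsync("outlook.office365.com", 993, SecureSocketOptions.Auto, cancellationToken);
await client.AuthenticateAsync(new SaslMechanismOAuth2(Username, tokenResult.AccessToken), cancellationToken);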
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer